#3D vector math regime
3D geometric transformations are common in games development. But there's some ambiguity about how we should use the math multiplication operation to perform 3d transformations. Sometimes this is described as row major vs column major order. There are some other areas of ambiguity like this, as well. Often there are multiple equally good options. But any good engine needs to select one and stick to it.
Here, I describe the rules for 3D transformations in XLE, and the rationale behind them.
[TOC]
##What is the best regime for 3D geometry maths?
When writing new 3D math code, there are many basic rules we need to consider. For example:
- Which direction is up in world space? (+Y axis or +Z axis)
- Should matrices be stored in row-major or column-major order
- Are vectors column matrices or row matrices?
- Do we use `vector * matrix` or `matrix * vector` to transform 3D vectors?
- (and many others)
There are no strict standards for these types of things. We can basically choose to do things in whichever way we prefer. But some people strongly prefer one particular way. Sometimes these questions are summed up as row-major vs column-major maths. Or OpenGL method vs DirectX method. But as we’ll see, that is an over-simplification, and a little misleading.
I’ll start with some really basic stuff, and work forward to the more complex stuff.
Some of this might seem a little basic. But this stuff can get confusing very easily; it's worth understanding in detail.
##Fundamental transformations
So, here's our most basic definition: fundamental 3D transformations convert points from one coordinate space to another coordinate space.
Most commonly, we want to convert from the local space of a model to world space. When we load a model from disk, all of its coordinates are defined as relative to some center point of the model – this is “local coordinate space.”
But we also have some idea of a natural coordinate space of the entire universe, called “world space”. In world space, all coordinates are defined relative to the center of the game universe. This allows us to describe the relative positions of models placed in different places in the world.
Most coordinate systems are “affine” – so let’s focus on them for now. We can convert from any affine 3D coordinate space A to any affine 3D coordinate space B using a 4x4 transformation matrix. Most people are probably familiar with this, but let’s write an algorithm.
Let M = transformation matrix from space A to space B
Let pB = 3D point in coordinate space B
Let pA = the same 3D point as pB, but in coordinate space A
Then,
pB = TransformPoint(M, pA);
So, we probably know that “TransformPoint” can be implemented using a matrix multiply. But if you remember back to math class, we can’t multiply a 3D point by a 4x4 matrix. Matrix multiply operations must take the form:
[M x N] = [M x A] * [A x N]
So, `[3 x 1] = [4 x 4] * [3 x 1]` is not defined! I've seen engines that will actually define `operator*` for 4x4 matrices and 3D vectors. This can create some confusion, because this is not a true mathematical operator.
To make a mathematically correct operation, we need to do 2 things… First, we have to add an imaginary “w” component to the point to make a 4D vector (ie, point [x,y,z] becomes [x,y,z,1]).
Now, we have 2 options (and only 2 options) for multiplying a 4D vector and a 4x4 matrix:
We can consider vectors to be “column vectors” – ie, matrices that are only 1 column wide:
[4 x 1] = [4 x 4] * [4 x 1] (vector = matrix * vector)
Or, we can consider vectors to be “row vectors” – ie, matrices that are only 1 row high:
[1 x 4] = [1 x 4] * [4 x 4] (vector = vector * matrix)
Notice from the algorithm that the order of multiplication must change. This is particularly important for shaders – consider the following HLSL code:
float4 worldSpace = mul(LocalToWorld, float4(pt, 1)); // column vector multiplication order
float4 worldSpace = mul(float4(pt, 1), LocalToWorld); // row vector multiplication order
Both cases will compile, and both are mathematically valid. But the result will be very different!
##Left to right ordering vs right to left ordering
Ordering is also important when multiplying several matrices together.
For example, if we want to make a transform matrix that will transform from local to camera space, we’ll need to combine 2 matrices:
LocalToCamera = Combine(LocalToWorld, WorldToCamera)
So, in column vector order (here, operator* is the matrix multiply operator):
Vector4 result = WorldToCamera_Matrix4x4 * (LocalToWorld_Matrix4x4 * localSpace_Vector4);
But in row vector order:
Vector4 result = (localSpace_Vector4 * LocalToWorld_Matrix4x4) * WorldToCamera_Matrix4x4;
The column vector order is sometimes called “right to left” order, because the transformation seems to go in a right-to-left way. We start in local space, transform local to world, then transform world to camera. Right to left.
Likewise, row vector order is left to right.
Critically, this affects the order of multiplying matrices. Notice that the order of the matrices is different in the above lines. Basic matrix associativity rules tell us we can combine LocalToWorld_Matrix4x4 and WorldToCamera_Matrix4x4 beforehand.
So, in column vector order:
Matrix4x4 localToCamera = WorldToCamera_Matrix4x4 * LocalToWorld_Matrix4x4;
Vector4 result = localToCamera * localSpace_Vector4;
Or in row vector order:
Matrix4x4 localToCamera = LocalToWorld_Matrix4x4 * WorldToCamera_Matrix4x4;
Vector4 result = localSpace_Vector4 * localToCamera;
We can describe this with an algorithm, “Combine”.
When using column vectors:
Combine(firstTransform, secondTransform) = secondTransform * firstTransform;
When using row vectors:
Combine(firstTransform, secondTransform) = firstTransform * secondTransform;
So, when we want to combine 2 transforms, rather than using the matrix multiplication operator, let's use an algorithm called "Combine". That algorithm is implemented in terms of the matrix multiplication operation, but it has a more specific purpose.
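For illustration, here's a minimal sketch of what that might look like in C++ under the column-vector convention (the `Matrix4x4` type here is hypothetical, not XLE's actual class):

```cpp
// Minimal sketch: Combine() built on top of an ordinary mathematical matrix
// multiply. The storage order is irrelevant to this rule.
#include <array>

struct Matrix4x4 { std::array<float, 16> m{}; };               // row-major storage assumed
inline float& At(Matrix4x4& a, int i, int j)       { return a.m[i * 4 + j]; }
inline float  At(const Matrix4x4& a, int i, int j) { return a.m[i * 4 + j]; }

// Standard mathematical matrix multiply
inline Matrix4x4 operator*(const Matrix4x4& lhs, const Matrix4x4& rhs)
{
    Matrix4x4 result;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                At(result, i, j) += At(lhs, i, k) * At(rhs, k, j);
    return result;
}

// Column-vector ("right to left") convention; the row-vector convention
// would instead return firstTransform * secondTransform.
inline Matrix4x4 Combine(const Matrix4x4& firstTransform, const Matrix4x4& secondTransform)
{
    return secondTransform * firstTransform;
}

// Usage:   Matrix4x4 localToCamera = Combine(localToWorld, worldToCamera);
```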
##First decisions
So, this sets up our first decisions:
- Should we consider vectors to be “column vectors” or “row vectors”?
- And (related) should we use right-to-left or left-to-right ordering for geometric transforms?
##Matrix i’s and j’s
Let’s make a simple but important rule.
In mathematical notation, we describe a specific element in a matrix with the familiar i, j notation.
Here, i is a row index, and j is a column index.
If we have a matrix class, Matrix4x4, then let’s make this rule:
- If there is an `operator()` or `operator[]`, then it should always be row, then column order (like mathematical notation)
In math we typically start from “1” – but that’s a little inconvenient for code. So, let's use zero based i and j coordinates.
In other words, `mat(i, j)` refers to row i, column j, using zero-based indices.
It sounds simple, but some math packages flip the order of i and j... But that just makes things way too confusing. So let’s make it clear – the order in `operator()` or `operator[]` is the mathematical order: row, then column.
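As a small sketch of that rule (the storage order here is just an assumption, and doesn't change the indexing convention):

```cpp
#include <array>

class Matrix4x4
{
public:
    // (row, column), zero-based, matching mathematical i, j notation
    float& operator()(unsigned i, unsigned j)       { return _m[i * 4 + j]; }
    float  operator()(unsigned i, unsigned j) const { return _m[i * 4 + j]; }
private:
    std::array<float, 16> _m{};     // row-major storage assumed for this sketch
};

// mat(2, 3) refers to the element in row 2, column 3 of the matrix.
```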
##Matrix layout
Let’s look at the algorithm for `[4 x 1] = [4 x 4] * [4 x 1]`.
Below, “lhs” is a transformation matrix, and “rhs” is a column vector.
result(0) = lhs(0,0) * rhs(0) + lhs(0,1) * rhs(1) + lhs(0,2) * rhs(2) + lhs(0,3) * rhs(3);
result(1) = lhs(1,0) * rhs(0) + lhs(1,1) * rhs(1) + lhs(1,2) * rhs(2) + lhs(1,3) * rhs(3);
result(2) = lhs(2,0) * rhs(0) + lhs(2,1) * rhs(1) + lhs(2,2) * rhs(2) + lhs(2,3) * rhs(3);
result(3) = lhs(3,0) * rhs(0) + lhs(3,1) * rhs(1) + lhs(3,2) * rhs(2) + lhs(3,3) * rhs(3);
For points, rhs(3) is always 1.f (by our definition above). So we can easily see that “lhs(0,3), lhs(1,3), lhs(2,3)” is the translation part of this matrix.
Let’s look at the algorithm for `[1 x 4] = [1 x 4] * [4 x 4]`:
result(0,0) = lhs(0) * rhs(0,0) + lhs(1) * rhs(1,0) + lhs(2) * rhs(2,0) + lhs(3) * rhs(3,0);
result(0,1) = lhs(0) * rhs(0,1) + lhs(1) * rhs(1,1) + lhs(2) * rhs(2,1) + lhs(3) * rhs(3,1);
result(0,2) = lhs(0) * rhs(0,2) + lhs(1) * rhs(1,2) + lhs(2) * rhs(2,2) + lhs(3) * rhs(3,2);
result(0,3) = lhs(0) * rhs(0,3) + lhs(1) * rhs(1,3) + lhs(2) * rhs(2,3) + lhs(3) * rhs(3,3);
Here, lhs(3) is always 1.f. So “rhs(3,0), rhs(3,1), rhs(3,2)” is the translation part of this matrix.
So, our left-to-right or right-to-left order changes the layout of the matrix. The translation part has moved into a different part of the matrix. In fact, it’s a transpose relationship.
`columnVector = matrix * columnVector` is the same as `rowVector = rowVector * transpose(matrix)`.
##Vectorization implications
Let’s look closely at the first line of the algorithm for `[4 x 1] = [4 x 4] * [4 x 1]`:
result(0) = lhs(0,0) * rhs(0) + lhs(0,1) * rhs(1) + lhs(0,2) * rhs(2) + lhs(0,3) * rhs(3);
We can rewrite this as the dot product of two 4D vectors:
result(0) = [lhs(0,0), lhs(0,1), lhs(0,2), lhs(0,3)] dot [rhs(0), rhs(1), rhs(2), rhs(3)]
result(1) and result(2) can also be written as dot products (result(3) isn’t needed, the result will always be 1.f). So here, vector transformation can be implemented as 3 dot products.
Or, there’s another way to look at it. Let’s look at the whole `[4 x 1] = [4 x 4] * [4 x 1]` algorithm again. It can be written another way:
result = [lhs(0,0), lhs(1,0), lhs(2,0)] * rhs(0)
+ [lhs(0,1), lhs(1,1), lhs(2,1)] * rhs(1)
+ [lhs(0,2), lhs(1,2), lhs(2,2)] * rhs(2)
+ [lhs(0,3), lhs(1,3), lhs(2,3)]
;
This method doesn’t require dot products. Instead we must scale vectors by a scalar, and add them together.
So, there are 2 vector-based ways to implement matrix transform:
- “Dot-product” method
- “Multiply-add” method
When using a SIMD instruction set[^SIMD], we should use one of these 2 methods.
[^SIMD]:Single instruction, multiple data. A single instruction performs the same operation on multiple values simultaneously. Usually a vector instruction set (like shader code, SSE, etc).
If you look closely, you’ll see that the dot product method uses vectors like `[lhs(0,0), lhs(0,1), lhs(0,2), lhs(0,3)]` (ie, rows from the original matrix).
But the multiply-add method uses vectors like `[lhs(0,0), lhs(1,0), lhs(2,0)]` (ie, columns from the original matrix).
When using vector instructions, it’s often more convenient for each of these vectors to be contiguous in memory. Let me put that another way (see the sketch after this list):
- if `[lhs(0,0), lhs(0,1), lhs(0,2), lhs(0,3)]` is contiguous in memory, use the dot product method for transformation
- if `[lhs(0,0), lhs(1,0), lhs(2,0)]` is contiguous in memory, use the multiply-add method for transformation
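Here's a plain scalar sketch of the two methods (no SIMD intrinsics), using a hypothetical 3x4 affine matrix type with the translation in column 3, as in the column vector case above:

```cpp
#include <array>

using Vector3   = std::array<float, 3>;
using Matrix3x4 = std::array<std::array<float, 4>, 3>;     // indexed [row][column]

// "Dot-product" method: each output component is a row of the matrix
// dotted with (x, y, z, 1).
Vector3 TransformPoint_Dot(const Matrix3x4& m, const Vector3& p)
{
    Vector3 r{};
    for (int i = 0; i < 3; ++i)
        r[i] = m[i][0]*p[0] + m[i][1]*p[1] + m[i][2]*p[2] + m[i][3];
    return r;
}

// "Multiply-add" method: scale each column of the matrix by one component of
// the point and sum, starting from the translation column.
Vector3 TransformPoint_MAdd(const Matrix3x4& m, const Vector3& p)
{
    Vector3 r = { m[0][3], m[1][3], m[2][3] };              // translation column
    for (int j = 0; j < 3; ++j)
        for (int i = 0; i < 3; ++i)
            r[i] += m[i][j] * p[j];
    return r;
}
```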
Some notes:
- Most modern GPU hardware can do both dot product and multiply-add methods
- In many cases, there doesn’t seem to be a large difference in performance between the two
- However, the dot product method is 3 instructions for an affine transform
- The multiply-add method is 4 instructions for affine or non-affine transforms
- Most CPUs can’t do the dot product method
- SSE didn’t get a dot product instruction until SSE4 (approx. 2007/2008 processors)
- So most CPUs are better at the multiply-add approach
- For CPUs without a SIMD instruction set, it probably doesn’t matter
##Constant part of the matrix
Let’s consider the column vector case again. We found that the translation part of the matrix is in `lhs(0,3), lhs(1,3), lhs(2,3)`. We can go a little further.
Here, I’m restricting us to affine transformation matrices. For these, R (the upper-left 3x3 block) is the rotation and scale part of the matrix, and T (the right-hand column) is the translation part.
But part of the matrix is `[0,0,0,1]`. It must always be `[0,0,0,1]`. If we were to find the complete set of all matrices that transform between 3D coordinate spaces that are useful to us, we’d find that they all have `[0,0,0,1]` in that part.
So, since we know those values are constant, we can just remove that part. We still get mathematically well defined cases:
Column vectors:
[3 x 1] = [3 x 4] * [4 x 1]
Row vectors:
[1 x 3] = [1 x 4] * [4 x 3]
It’s a little weird because the input vectors have 4 elements, but the output vectors have 3. There is some other weirdness, also.
We can’t invert or take the determinant of a 3x4 or 4x3 matrix. But we can for a 4x4 matrix with an assumed `[0,0,0,1]` part. Strictly speaking, 3x4 and 4x3 matrices have no inverse.
But every affine geometric transform should have another affine geometric transform that will undo it, and that inverse should itself also be an affine geometric transform.
We can’t even multiply two 4x3 matrices or two 3x4 matrices. Matrix multiplication is only defined for the `[M x N] = [M x A] * [A x N]` case; `[3x4] * [3x4]` is undefined!
So; it’s all a little weird. To avoid these problems, let's look at it in a different way:
- There are no 3x4 or 4x3 matrices for 3D transforms. All 3D transforms are represented by 4x4 matrices.
- But sometimes we don’t store or calculate the `(0,0,0,1)` part – we just assume it’s there (see the sketch below)
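Here's a minimal sketch of combining two such transforms, storing only the 3x4 part and assuming the `(0,0,0,1)` row (column vector convention; not XLE's actual implementation):

```cpp
#include <array>

using AffineTransform = std::array<std::array<float, 4>, 3>;    // indexed [row][column]

// Equivalent to the full 4x4 product secondTransform * firstTransform, with the
// bottom row of both operands taken to be (0, 0, 0, 1).
AffineTransform Combine(const AffineTransform& firstTransform,
                        const AffineTransform& secondTransform)
{
    AffineTransform result{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 4; ++j) {
            float sum = 0.f;
            for (int k = 0; k < 3; ++k)
                sum += secondTransform[i][k] * firstTransform[k][j];
            if (j == 3) sum += secondTransform[i][3];       // times the assumed 1
            result[i][j] = sum;
        }
    }
    return result;
}
```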
But note that we still need full 4x4 matrices for some transformations! We use a non-affine transformation for the projection matrix.
##Matrix layout: row-major vs column-major
Obviously a 4x4 matrix can be stored in memory as 16 floats. But how should we order those 16 float values? There are 2 possibilities:
Method 1 (called row-major order):
mat(0,0); mat(0,1); mat(0,2); mat(0,3); mat(1,0); mat(1,1); mat(1,2); mat(1,3); mat(2,0); mat(2,1); mat(2,2); mat(2,3); mat(3,0); mat(3,1); mat(3,2); mat(3,3)
Method 2 (called column-major order):
mat(0,0); mat(1,0); mat(2,0); mat(3,0); mat(0,1); mat(1,1); mat(2,1); mat(3,1); mat(0,2); mat(1,2); mat(2,2); mat(3,2); mat(0,3); mat(1,3); mat(2,3); mat(3,3);
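As a tiny illustration, element access is still (row, column) in both cases; only the mapping from (i, j) to the flat array of 16 floats changes:

```cpp
// Row-major: rows are contiguous in memory. Column-major: columns are contiguous.
inline int RowMajorIndex(int i, int j)    { return i * 4 + j; }
inline int ColumnMajorIndex(int i, int j) { return j * 4 + i; }

// e.g. element (2, 3) sits at float offset 11 in row-major order,
// but at float offset 14 in column-major order.
```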
Notice that this ordering doesn’t affect any algorithms or usage patterns. All the patterns for multiplication and transformation are just the same – because all we’re doing is changing the layout of the values in memory.
"row-major" and "column-major" order are commonly used terms – but actually this is one of the least important rules described here. It’s only important for the internal behaviour of the matrix maths class.
But there are some important things to consider, most related to GPUs.
When passing matrices to shaders, we always send 4D vectors (never 3D vectors). This is because GPU shader constants can only be 4D vectors (ignoring integer and boolean behaviour). So, if we want to send the full 16 floats, it’s pretty easy – just 4 vectors.
But for matrices with an assumed `[0,0,0,1]` part, we actually only store 12 floats in memory. So ideally we want to send 3 vectors, and we want each vector to start on a 128 bit alignment boundary.
This is much easier if the assumed `[0,0,0,1]` part is the last part of the matrix in memory – ie, `mat(3,0); mat(3,1); mat(3,2); mat(3,3)` in row-major order, or `mat(0,3); mat(1,3); mat(2,3); mat(3,3)` in column-major order.
That means:
- Column vector cases work better with row-major order
- Row vector cases work better with column-major order
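For the first case (column vectors with row-major storage), the CPU-side constant data might look something like the sketch below. The struct and field names are illustrative, not XLE's actual types:

```cpp
// Three 16-byte-aligned vectors; the assumed [0,0,0,1] row (mat(3,0)..mat(3,3))
// is simply not stored or uploaded.
struct alignas(16) Float4 { float x, y, z, w; };

struct LocalToWorldConstants
{
    // The three stored rows of a row-major, column-vector-convention matrix;
    // each row maps directly onto one float4 shader constant.
    Float4 row0;    // mat(0,0), mat(0,1), mat(0,2), mat(0,3)
    Float4 row1;    // mat(1,0), mat(1,1), mat(1,2), mat(1,3)
    Float4 row2;    // mat(2,0), mat(2,1), mat(2,2), mat(2,3)
};
static_assert(sizeof(LocalToWorldConstants) == 48, "3 x float4 = 48 bytes");
```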
Here's where it starts to get confusing. If you look closely, you’ll see that (in both cases) the matrices end up arranged in memory in a way that is better suited to the dot-product method of multiplication.
This introduces a CPU vs GPU problem, because the multiply-add method is better for older CPUs, but the dot-product method is better for GPUs.
But we need to consider these things:
- The GPU will probably do 99% of all vector transforms, the CPU many fewer
- The CPU does a lot of matrix by matrix multiplies – but none of this matters for operator*(Matrix4x4, Matrix4x4).
- If we choose to be GPU-friendly, it could be a big improvement for GPU performance, but only a small decrease for CPU performance
- But if we choose to be CPU-friendly, it could be a big decrease for GPU performance, but only a small improvement for CPU performance
- Newer CPUs can do the dot-product method efficiently, anyway
So the sensible option is usually to pick a method that is best for the GPU.
##Basis vectors within a matrix
If we go with the dot-product method, it means our affine transformation matrix will appear in memory like this:
[offset 0] Ix; Jx; Kx; Tx;
[offset 16] Iy; Jy; Ky; Ty;
[offset 32] Iz; Jz; Kz; Tz;
It doesn’t matter if it’s row-major or column-major order. If we’re using dot products, that’s how the matrix appears in memory.
Let’s imagine this transform is a local-to-world transform. In this case I, J and K are the basis axes for local space. I is the direction of local space +X, J is +Y and K is +Z.
This is one disadvantage of this method. Often, it’s useful to extract the basis vectors from a local-to-world transform (mostly because we want to know the forward direction). However, these vectors are not contiguous in memory. That can be a slight annoyance, and it tends to affect AI programmers the most.
Usually the best way to solve that is by implementing a `StridedVector` class that operates like a vector, but allows a stride value for the distance between the components. So constructs like `Transform.Up() = Vector3(0, 0, 1);` can still work fine.
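A rough sketch of the idea (the class name follows the text above, but the interface is illustrative):

```cpp
#include <cstddef>

// A view over three floats separated by a fixed stride, so a basis vector can
// be read or written in place even though its components aren't contiguous.
class StridedVector
{
public:
    StridedVector(float* start, std::size_t strideInFloats)
    : _start(start), _stride(strideInFloats) {}

    float& operator[](std::size_t i)       { return _start[i * _stride]; }
    float  operator[](std::size_t i) const { return _start[i * _stride]; }

    StridedVector& operator=(const float (&v)[3])
    {
        (*this)[0] = v[0]; (*this)[1] = v[1]; (*this)[2] = v[2];
        return *this;
    }

private:
    float*      _start;
    std::size_t _stride;
};

// With the layout above (3 rows of 4 floats), the K basis vector sits at float
// offsets 2, 6 and 10:
//      float kAxis[] = {0.f, 0.f, 1.f};
//      StridedVector up(&matrixData[2], 4);
//      up = kAxis;
```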
##Our options
So 3x4 matrices and modern GPUs have imposed some restrictions on which options are sensible. We can reject 2 possibilities (column vectors with column-major order matrices, and row vectors with row-major order matrices – these are the 2 “multiply-add” options). That leaves us with only 2 options.
So far, the decision that affects the most clients is “left-to-right” or “right-to-left” multiplication order. So, we can use that as a basis.
Right-to-left multiplication order
- Vectors are “column vectors”
- Matrices are stored in row-major order
- Dot product method of multiplication
Left-to-right multiplication order
- Vectors are “row vectors”
- Matrices are stored in column-major order
- Dot product method of multiplication
Both methods are equally correct, and can be equally efficient on most hardware (though sometimes a device, like the PS3, will have extra restrictions). Right-to-left seems to be the more common method, and the one I’m personally most familiar with.
But left-to-right feels more natural when reading in English. Compare left-to-right:
cameraSpacePoint = localSpacePoint * localToWorld * worldToCamera;
To right-to-left:
cameraSpacePoint = worldToCamera * localToWorld * localSpacePoint;
##One step back... into the abstract
For the most part, we should abstract clients from this decision. We can do this easily, by creating a geometric transforms library. To do this, we need to separate the matrix math from the geometry math parts.
I mentioned earlier that our 3x4 transform matrices are really 4x4 matrices with an assumed `[0,0,0,1]` part. This means that these 3x4 transforms aren’t really matrices at all. They are “geometric transforms” that are implemented via matrix math.
There are other forms of geometric transforms as well:
- Quaternion & scale & translation
- Euler angles & scale & translation
- Dual quaternions
- Rotations about an axis
- (etc)
In other words, there are many mathematical objects that can be used to represent a geometric transformation. What we want is a set of concepts that apply to geometric transforms, not matrix math.
The simplest concept is “combining two transforms”. Given an A2B_Transform, and a B2C_Transform, produce an A2C_Transform. For example:
template<typename Transform>
Transform Combine(const Transform& firstTransform, const Transform& secondTransform);
template<typename Transform>
void Combine_InPlace(const Transform& firstTransform, Transform& secondTransform);
In the case of transforms that are represented by matrices, `Combine()` can return either `firstTransform * secondTransform`, or `secondTransform * firstTransform` (depending on how we’ve defined our ordering).
We can also implement `TransformPoint()` and `TransformVector()` as appropriate. And also `GetForward()`, `GetUp()`, etc.
So the client programmer isn’t thinking in terms of matrix math concepts, but in terms of geometric transform concepts. This insulates the client from the multiplication order (and all the related decisions). Those decisions affect the way that `Combine()` is implemented, but the interface stays the same.
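As an illustration, here's roughly what some of those functions could look like over a column-vector-convention 3x4 transform. The signatures are a sketch, not XLE's actual interface, and `GetForward()` assumes the "+Y is forward" local space convention discussed below:

```cpp
#include <array>

using Vector3         = std::array<float, 3>;
using AffineTransform = std::array<std::array<float, 4>, 3>;    // indexed [row][column]

// Points pick up the translation (implicit w = 1)...
Vector3 TransformPoint(const AffineTransform& m, const Vector3& p)
{
    return { m[0][0]*p[0] + m[0][1]*p[1] + m[0][2]*p[2] + m[0][3],
             m[1][0]*p[0] + m[1][1]*p[1] + m[1][2]*p[2] + m[1][3],
             m[2][0]*p[0] + m[2][1]*p[1] + m[2][2]*p[2] + m[2][3] };
}

// ...directions do not (implicit w = 0)
Vector3 TransformVector(const AffineTransform& m, const Vector3& v)
{
    return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
             m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
             m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

// The basis vectors are just columns of the matrix; with "+Y is forward" in
// local space, the forward direction is the second column.
Vector3 GetForward(const AffineTransform& m)
{
    return { m[0][1], m[1][1], m[2][1] };
}
```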
Another way to say this is:
- If we have a `Matrix` class, that class should represent only a pure mathematical matrix
- It should only implement functionality that is defined by basic matrix math
- (inversion, eigenvectors, determinants, etc)
- If we have a `Matrix3x4` or `Matrix4x3`, it can't have inversion or determinants
- Because those operations aren’t defined for a mathematical 3x4 or 4x3 matrix
- But we can have a `Transform` class, or a set of `Transform` functions
- These might be implemented by matrix math
- But they define functionality specific to geometric transforms
We can do a similar thing in shaders. But, as it turns out, HLSL is pretty flexible for this stuff.
##Basic coordinate axes
Ok, so let’s talk a little about how to define a coordinate space. We have a lot of choices, many of which are well understood:
We can freely choose whether to use left handed or right handed coordinate spaces for world and local spaces. Neither DirectX nor OpenGL imposes any restrictions on how we define these coordinates (regardless of sometimes popular opinion). The only restriction either API imposes is on post-projection homogeneous clip space (or NDC space). In this case, both APIs are actually left-handed.
It can be really frustrating if we have to flip coordinate systems when exporting from tools. Sometimes it’s not entirely clear where the flip should go.
- On the raw geometry itself?
- Then we have to flip the winding order also
- On the localToModel transform?
- Then we have to handle the flip when applying any animation
- What if we would otherwise just have an identity localToModel?
So, we can save ourselves some hassle by trying to pick a coordinate system that matches as many of our tools as possible. I’ve worked on many games where the in-game coordinate system was different from the coordinate system that the artists use for authoring data – and it’s the kind of problem that just gets more difficult as time goes on.
Here are the natural coordinates for several tools:
####Max natural coordinates:
View space (ie camera space)
- Right handed
- +X to the right
- +Y up
- -Z into the screen
World space
- Right handed
- +X right
- +Y forward
- +Z up
Local object space
- Right handed
- (probably either +X or +Y is forward)
- +Z up
####Collada natural coordinates
View space
- Right handed
- +X to the right
- +Y up
- -Z into the screen
World space
- Left handed (?)
- Configurable up (defaults to +Y)
####Blender coordinates
World space
- Right handed
- +Z is up
####Unity engine coordinates
World space
- Left handed
- +Y is up
####Unreal engine coordinates
World space
- Left handed
- +Z is up
####CryEdit / Crytek Sandbox
World space
- Right handed
- +Z is up
View space
- Right handed
- +X is right
- +Y is forward
- +Z is up
##Decoding all these coordinate systems
All of those coordinate systems are a little overwhelming. But not everything is critical. The most important decisions to make are simple:
- Right handed or left handed coordinates?
- Which direction is up in world space?
- Which direction is forward in local space?
- For example: if we built a car, what is the forward direction it should drive along in local space?
It seems that a right handed coordinate system is probably most logical. For up, the two options are +Y or +Z (and +Z is probably slightly more standard).
For forward in local space, we have more choices. But, depending on which we choose, one of the other basis vectors will end up being either left or right. It’s natural to pick a system that will match world space in this way.
So, if +Y is up in world space, then either:
- +X is left, +Z is forward
- or +X is forward, +Z is right
But if +Z is up, then either:
- +X is right and +Y is forward
- or +X is forward, and +Y is left
Some people think that it’s more natural to have one of the basis vectors be “right”, rather than “left”. It would also seem slightly weird to have +Z being either left or right (since in almost all coordinate systems, +Z is either up, forward or backward).
That kind of suggests that the Max world coordinates / Crytek view coordinates are the most natural:
- +X right
- +Y forward
- +Z up
But, we might want a special case for cameras (the Max/OpenGL defaults):
- +X to the right
- -Z into the screen
- +Y up
This special case gives us a natural-feeling XY plane in view space, and matches some common standards.
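As a sketch, building such a view-space basis in world coordinates could look something like this (the function is illustrative, not XLE's camera code):

```cpp
#include <array>
#include <cmath>

using Vector3 = std::array<float, 3>;

Vector3 Cross(const Vector3& a, const Vector3& b)
{
    return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}

Vector3 Normalize(const Vector3& v)
{
    float l = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return { v[0]/l, v[1]/l, v[2]/l };
}

// Builds the view space axes (in world coordinates) for a camera looking along
// 'forward', following the convention above: +X right, +Y up, -Z into the screen.
// In a column-vector-convention view-to-world matrix, these become the columns
// (right, up, viewZ), with the camera position as the translation column.
void BuildCameraBasis(const Vector3& forward, const Vector3& upHint,
                      Vector3& right, Vector3& up, Vector3& viewZ)
{
    Vector3 f = Normalize(forward);
    right = Normalize(Cross(f, upHint));    // view space +X
    up    = Cross(right, f);                // view space +Y
    viewZ = { -f[0], -f[1], -f[2] };        // view space +Z points out of the screen
}
```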
##Rotation about axis
Another decision to make is how to handle rotations. Usually we want to express a rotation about an axis as an axis and an angle value.
But, if we imagine ourselves as a viewer looking along the given axis, should the rotation go clockwise or counterclockwise?
It seems that the most common standard for this comes from the old OpenGL Red Book: rotations follow the right-hand rule, so the rotation appears counterclockwise when the axis points towards the viewer (assuming a right handed coordinate system).
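For reference, a sketch of the standard right-hand-rule axis-angle rotation (the Rodrigues form), which matches that convention:

```cpp
#include <array>
#include <cmath>

using Vector3   = std::array<float, 3>;
using Matrix3x3 = std::array<std::array<float, 3>, 3>;      // indexed [row][column]

// Rotation by 'angle' radians about the unit-length 'axis', following the
// right-hand rule in a right handed coordinate system (column vector convention).
Matrix3x3 RotationMatrix(const Vector3& axis, float angle)
{
    const float c = std::cos(angle), s = std::sin(angle), t = 1.f - c;
    const float x = axis[0], y = axis[1], z = axis[2];
    Matrix3x3 r = {{
        {{ t*x*x + c,   t*x*y - s*z, t*x*z + s*y }},
        {{ t*x*y + s*z, t*y*y + c,   t*y*z - s*x }},
        {{ t*x*z - s*y, t*y*z + s*x, t*z*z + c   }}
    }};
    return r;
}

// e.g. RotationMatrix({0.f, 0.f, 1.f}, pi/2) maps +X onto +Y: counterclockwise
// about +Z when +Z points towards the viewer.
```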
##Euler angles order
Another standard is the order of combining Euler angles. Euler angles are 3 angles that can be combined to represent any rotation in 3d space. But the exact meaning of those 3 angles differs greatly from content package to content package.
This is a difficult one; but fortunately, mostly only animation programmers need to think about it.
Wikipedia lists 12 possibilities:
- Classic Euler angles (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y)
- Tait–Bryan angles (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z)
Most graphics tools support multiple different types. But there are sometimes some tricks to supporting them – for example, sometimes the numbers represent rotations around static axes, while other times they are rotations around previously rotated axes.
##Conclusion
Let me try to sum up. Every engine needs to build a regime for these things, but different engines can end up with slightly different solutions. Sometimes the best method depends on the hardware involved.
There are many viable possibilities. The only really important rule is to try to understand all the implications at the start, and to build in some flexibility so that we can change our choices if we ever need to.
Here is the regime used by XLE:
###Matrix layout
There are 2 reasonable choices for matrix operations:
Right-to-left multiplication order
- Vectors are “column vectors”
- Matrices are stored in row-major order
- Dot product method of multiplication
Left-to-right multiplication order
- Vectors are “row vectors”
- Matrices are stored in column-major order
- Dot product method of multiplication
XLE uses the right-to-left method exclusively.
However, this is mostly hidden behind a geometric transform abstraction, so client code should mostly not be affected by the multiplication order.
###Coordinate basis
There are many options for defining local and world space coordinate systems, but it seems like right handed systems are more common.
From there, the best coordinate system is the one that most closely matches what the artists use in the content tool. That depends on which content tool is used (and sometimes multiple tools are used, each with different systems).
XLE uses the following standard coordinate bases (both right handed):
Object to world
- +X right
- +Y forward
- +Z up
View to world
- +X towards the right of the screen
- -Z into the screen
- +Y towards the top of the screen
Note that for both DirectX and OpenGL, we need to add a flip in the projection space in order to make NDC space left-handed.
###Rotations
Rotations should follow the right-hand rule: counterclockwise when the axis of rotation points towards the viewer (and angles should be expressed in radians). Euler angle functions should support multiple different ordering schemes.