Unity’s Matrix4x4 struct uses column-major storage. However, the actual memory layout rarely matters in practice, since you usually access elements by name or through the indexers. The struct has 16 float member variables:
m00 m01 m02 m03
m10 m11 m12 m13
m20 m21 m22 m23
m30 m31 m32 m33
They are stored in this order:
m00; m10; m20; m30; m01; m11; m21; m31; m02; m12; m22; m32; m03; m13; m23; m33;
| column 0 | column 1 | column 2 | column 3 |
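For illustration, here is a sketch of how the linear indexer maps to the named fields (Matrix4x4’s `this[int index]` follows the column-major order shown above, i.e. `m[row + column * 4]`):

```csharp
using UnityEngine;

public class MatrixLayoutExample : MonoBehaviour
{
    void Start()
    {
        Matrix4x4 m = Matrix4x4.identity;

        // The linear indexer runs down the columns:
        // m[0] == m00, m[1] == m10, m[2] == m20, m[3] == m30, m[4] == m01, ...
        // In general: m[row + column * 4] == m[row, column]
        Debug.Log(m[1] == m.m10);         // True
        Debug.Log(m[4] == m.m01);         // True
        Debug.Log(m[2 + 3 * 4] == m.m23); // True

        // GetColumn / GetRow return the corresponding Vector4:
        Vector4 col0 = m.GetColumn(0); // (m00, m10, m20, m30)
        Vector4 row0 = m.GetRow(0);    // (m00, m01, m02, m03)
    }
}
```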
As KazYarnof said in the comments, there’s no inherent advantage or disadvantage to a column-major versus a row-major layout. However, it does of course affect how you have to treat the matrix. As you might know, a matrix multiplication is only defined when the column count of the first matrix matches the row count of the second. A Vector4 can be treated either as a 1x4 matrix (row vector) or as a 4x1 matrix (column vector).
(1x4) * (4x4) --> (1x4)
(4x4) * (4x1) --> (4x1)
R = M * V // V treated as column vector
( V.x V.y V.z V.w )
| | | |
\|/ \|/ \|/ \|/
m00 m01 m02 m03 --> R.x
m10 m11 m12 m13 --> R.y
m20 m21 m22 m23 --> R.z
m30 m31 m32 m33 --> R.w
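A quick sketch of `R = M * V` with Unity’s API (the `*` operator treats V as a column vector, exactly like the drawing above):

```csharp
using UnityEngine;

public class ColumnVectorExample : MonoBehaviour
{
    void Start()
    {
        Matrix4x4 M = Matrix4x4.Translate(new Vector3(1f, 2f, 3f));
        Vector4 V = new Vector4(5f, 6f, 7f, 1f); // w == 1 --> a point

        // Unity's operator* implements the column-vector scheme:
        Vector4 R = M * V;

        // ...which is the same as dotting each row of M with V:
        Vector4 manual = new Vector4(
            Vector4.Dot(M.GetRow(0), V),
            Vector4.Dot(M.GetRow(1), V),
            Vector4.Dot(M.GetRow(2), V),
            Vector4.Dot(M.GetRow(3), V));

        Debug.Log(R);      // (6.0, 8.0, 10.0, 1.0)
        Debug.Log(manual); // same result
    }
}
```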
R = V * M // V treated as row vector.
// this isn't defined in Unity. A vector is always treated as a column vector
// this could be done in Unity by doing "M.transpose * V"
V.x --> m00 m01 m02 m03
V.y --> m10 m11 m12 m13
V.z --> m20 m21 m22 m23
V.w --> m30 m31 m32 m33
| | | |
\|/ \|/ \|/ \|/
( R.x R.y R.z R.w )
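If you ever need the row-vector form V * M, the transpose trick mentioned above can be sketched like this:

```csharp
using UnityEngine;

public class RowVectorExample : MonoBehaviour
{
    void Start()
    {
        Matrix4x4 M = Matrix4x4.Rotate(Quaternion.Euler(0f, 90f, 0f));
        Vector4 V = new Vector4(1f, 0f, 0f, 0f); // w == 0 --> a direction

        // "V * M" (row vector) is equivalent to "M.transpose * V":
        Vector4 rowResult = M.transpose * V;

        // ...which is NOT the same as "M * V" unless M is symmetric:
        Vector4 colResult = M * V;

        Debug.Log(rowResult);
        Debug.Log(colResult);
    }
}
```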
Don’t get confused by those “drawings”: I arranged the vectors so you can see how they are multiplied, not according to their actual layout (row / column).
The major difference between Unity and OpenGL is that Unity uses a left-handed coordinate system while OpenGL uses a right-handed system.
        X-axis         Y-axis         Z-axis
Unity   left-to-right  bottom-to-top  near-to-far
OpenGL  left-to-right  bottom-to-top  far-to-near
So “forward” in OpenGL is “-z”, while in Unity forward is “+z”. Most hand rules you might know from math are mirrored in Unity. For example, the cross product usually follows the right-hand rule for c = a x b, where a is the thumb, b the index finger and c the middle finger. In Unity you use the same logic, but with the left hand.
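You can see the left-hand rule in action with Vector3.Cross (a small sketch):

```csharp
using UnityEngine;

public class LeftHandRuleExample : MonoBehaviour
{
    void Start()
    {
        // Left hand: thumb = right (+x), index = up (+y) --> middle = forward (+z)
        Vector3 c = Vector3.Cross(Vector3.right, Vector3.up);
        Debug.Log(c == Vector3.forward); // True: (1,0,0) x (0,1,0) == (0,0,1)
    }
}
```

Algebraically the cross product formula is the same in both systems; it’s the interpretation of “+z” as forward that makes it a left-hand rule in Unity.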
However, this does not affect the projection matrix, as Unity uses the OpenGL convention for it. The required z-flip is done by the camera’s worldToCameraMatrix, so the projection matrix should look the same as in OpenGL.
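That z-flip can be illustrated like this (a sketch; it assumes the camera transform has no scaling, which is the usual case):

```csharp
using UnityEngine;

public class ViewMatrixExample : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;

        // worldToCameraMatrix is the inverse of the camera's transform
        // with the z axis negated, so that -z is forward as in OpenGL:
        Matrix4x4 zFlip = Matrix4x4.Scale(new Vector3(1f, 1f, -1f));
        Matrix4x4 expected = zFlip * cam.transform.worldToLocalMatrix;

        Debug.Log(cam.worldToCameraMatrix);
        Debug.Log(expected); // should match, up to floating point error
    }
}
```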
By using my ProjectionMatrixEditorWindow you can view (get) and edit (set) the projection matrix of the main camera inside the editor. This is an editor script so just place it in a folder called “editor”. You can open the window via menu (Tools → ProjectionMatrixEditor).
As you can see, the resulting matrix looks similar to the one you’ve posted. However, in Unity L and R (as well as T and B) are always of equal magnitude, so the (R+L) term (as well as the (T+B) term) evaluates to 0.
Unity has an example script in the docs showing how to set up a custom off-center perspective matrix.
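For reference, such an off-center (OpenGL-style) perspective matrix can be built by hand roughly like this. This is a sketch following the standard glFrustum layout, along the lines of the PerspectiveOffCenter example in the Unity docs:

```csharp
using UnityEngine;

public class OffCenterProjection : MonoBehaviour
{
    // OpenGL-style frustum matrix (same element layout as glFrustum).
    static Matrix4x4 PerspectiveOffCenter(float left, float right,
                                          float bottom, float top,
                                          float near, float far)
    {
        float x = 2.0f * near / (right - left);
        float y = 2.0f * near / (top - bottom);
        float a = (right + left) / (right - left);
        float b = (top + bottom) / (top - bottom);
        float c = -(far + near) / (far - near);
        float d = -(2.0f * far * near) / (far - near);

        Matrix4x4 m = Matrix4x4.zero;
        m[0, 0] = x;  m[0, 2] = a;
        m[1, 1] = y;  m[1, 2] = b;
        m[2, 2] = c;  m[2, 3] = d;
        m[3, 2] = -1.0f;
        return m;
    }

    void LateUpdate()
    {
        // Example values; an off-center frustum skews the view asymmetrically.
        Camera cam = GetComponent<Camera>();
        cam.projectionMatrix = PerspectiveOffCenter(-0.05f, 0.15f,
                                                    -0.1f, 0.1f,
                                                    0.1f, 100f);
    }
}
```

With symmetric bounds (left == -right, bottom == -top) the a and b terms become 0 and you get an ordinary perspective matrix.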
I forgot to mention that since Unity runs on different platforms using different graphics APIs (DirectX / OpenGL), the actual projection matrix representation on the GPU might differ from the one you work with in Unity. You usually don’t have to worry about that, since Unity handles the conversion automatically. The only case where it matters is when you pass a matrix from your code directly to a shader. For that case Unity offers the method GL.GetGPUProjectionMatrix, which converts a given projection matrix into the format used by the GPU.
So to sum up how the MVP matrix is composed:
- M = transform.localToWorldMatrix of the object
- V = camera.worldToCameraMatrix
- P = GL.GetGPUProjectionMatrix(camera.projectionMatrix, renderIntoTexture)
- MVP = P * V * M
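Passing such an MVP matrix to a shader manually could look like this. This is a sketch; the shader property name “_CustomMVP” is just an example, and the second argument of GL.GetGPUProjectionMatrix (renderIntoTexture) is false here because we render to the screen:

```csharp
using UnityEngine;

public class MvpExample : MonoBehaviour
{
    public Camera cam;
    public Material material;

    void Update()
    {
        Matrix4x4 M = transform.localToWorldMatrix;
        Matrix4x4 V = cam.worldToCameraMatrix;
        Matrix4x4 P = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);

        // Multiplication reads right-to-left: model, then view, then projection.
        Matrix4x4 MVP = P * V * M;
        material.SetMatrix("_CustomMVP", MVP);
    }
}
```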
ps: If i messed something up, feel free to leave a comment. ^^