Convention of matrices passed to shaders

I’m seeing some funny results when passing matrices to shaders manually versus using the built-in shader variables. Unity stores matrices in column-major format, but I’m not sure whether they stay column-major once they reach the shader.

Let’s take _CameraToWorld, for instance. When it’s declared in the shader, Unity sets it up automatically. Does the convention of _CameraToWorld differ between Cg/HLSL and GLSL?
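
For context, here’s a stripped-down version of the two setups I’m comparing (the manually-passed matrix, its property name, and the helper function are just placeholders for my own code):

    // Built-in: Unity fills this in automatically once it's declared.
    float4x4 _CameraToWorld;

    // Manual: a matrix I upload myself from script, e.g. with
    // material.SetMatrix("_MyCameraToWorld", camera.cameraToWorldMatrix);
    float4x4 _MyCameraToWorld;

    float4 ToWorld(float4 viewPos)
    {
        // Should this be mul(matrix, vector) or mul(vector, matrix),
        // and is the answer the same on the Cg/HLSL and GLSL paths?
        return mul(_CameraToWorld, viewPos);
    }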

I wouldn’t think so.

I bring this up because I only noticed today that Unity recommends using gameObject.renderer.localToWorldMatrix instead of gameObject.transform.localToWorldMatrix. I have no idea how my code even works!
Does this mean the resulting matrix (using renderer) will be row-major for HLSL/Cg and column-major for GLSL?
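
One way I’ve been trying to narrow this down (just a sketch, the property name is made up): upload a matrix whose elements encode their own position, something like m[r, c] = 10 * r + c set via material.SetMatrix("_ProbeMatrix", m), then read a single element back in the shader:

    float4x4 _ProbeMatrix;   // uploaded from script with material.SetMatrix

    float Probe()
    {
        // HLSL indexing is logical [row][column], independent of how the
        // matrix is packed into registers. If this returns 3, the matrix
        // arrived with the same logical layout as the Matrix4x4 on the CPU
        // (m03); if it returns 30, it arrived transposed.
        return _ProbeMatrix[0][3];
    }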

While we’re at it, what’s with the weird documentation for Camera.cameraToWorldMatrix?

“Note that camera space matches OpenGL convention: camera’s forward is the negative Z axis. This is different from Unity’s convention, where forward is the positive Z axis.”

Screen space in DX has X going right, Y down, and Z into the display, so it’s RHS (with the origin at the top left).

Screen space in GL, if I’m not mistaken, has X going right, Y up, and Z into the display, so it’s LHS (with the origin at the bottom left).
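
Just to sanity-check my handedness reasoning with the usual cross-product test (my own working, not something from the docs):

    \hat{x} \times \hat{y} = \hat{z} \quad \text{(right-handed basis)}

With X = right and Y = down, X × Y points into the display, matching Z, so the DX layout is right-handed; with X = right and Y = up, X × Y points out of the display, opposite to Z, so the GL layout is left-handed.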

Is Unity talking about some other convention here?

Matrix packing convention doesn’t differ between platforms in Unity. It’s been a few weeks since I last messed with this, so bear with me if I’m in error, but it seems their built-in matrices are declared row-major and are not transposed on the CPU, so from a code perspective it’s literally the same matrix on the CPU and the GPU if you use array indexing.

Most shader examples and systems I’ve seen require transposing the matrices on the CPU before passing them to the GPU, so that the matrix multiply in shader code reads one particular way, something like mul(matrix, vector). Unity doesn’t do the transpose, and they don’t declare the matrices column-major in the shader either, so the transformations must be done like mul(vector, matrix) to get the desired result. It’s very confusing at first, but it’s actually pretty efficient and makes sense once you get used to it.
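
A quick shader-side way to see why skipping the CPU transpose just flips the multiplication order rather than changing the result (a sketch; _SomeMatrix stands in for any matrix uploaded without a transpose):

    float4x4 _SomeMatrix;   // any matrix uploaded without a CPU-side transpose

    float4 TransformBothWays(float4 v)
    {
        // mul(v, M) treats v as a row vector, so these two expressions
        // give identical results for any matrix M:
        float4 a = mul(v, _SomeMatrix);             // row-vector style
        float4 b = mul(transpose(_SomeMatrix), v);  // column-vector style
        // Transposing on the CPU and writing mul(matrix, vector) is therefore
        // equivalent to not transposing and writing mul(vector, matrix).
        return a;
    }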

Camera.cameraToWorldMatrix is what it says: it uses OpenGL’s camera-space convention, a right-handed system with +Z coming OUT of the display, towards the viewer, versus the left-handed, +Z-forward convention used in the Unity editor and for world space, object space, etc.