I’m seeing some funny results when passing matrices to shaders manually versus using the built-in shader variables. Unity stores matrices in column-major format, but I’m not sure whether they stay that way once they reach the shader.
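For context, this is roughly what I’m doing on the C# side (just a sketch; _MyMatrix and the class name are made up, and it assumes a material whose shader declares a matching float4x4):

```csharp
using UnityEngine;

// Minimal sketch: poking at Unity's Matrix4x4 layout and passing a matrix by hand.
// "_MyMatrix" is a made-up shader property name; any float4x4 uniform would do.
public class MatrixLayoutCheck : MonoBehaviour
{
    public Material mat;   // material whose shader declares: float4x4 _MyMatrix;

    void Update()
    {
        Matrix4x4 m = transform.localToWorldMatrix;

        // On the script side Matrix4x4 is column-major: for a TRS matrix the
        // translation sits in the last column.
        Debug.Log("translation column: " + m.GetColumn(3));   // (pos.x, pos.y, pos.z, 1)
        Debug.Log("indexer [row, col]:  " + m[0, 3]);         // same as GetColumn(3).x

        // Passing it manually; in the shader I then use mul(_MyMatrix, v.vertex)
        // exactly as I would with a built-in matrix.
        mat.SetMatrix("_MyMatrix", m);
    }
}
```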
Let’s take _CameraToWorld, for instance. When it’s declared in the shader, Unity automatically sets it up. Does the convention of _CameraToWorld differ between CG/HLSL and GLSL?
I wouldn’t think so.
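To pin it down, this is the kind of test I’ve been running: feed the shader my own copy of camera.cameraToWorldMatrix under a made-up name (_ManualCameraToWorld) and diff it against the auto-filled _CameraToWorld shader-side (sketch only):

```csharp
using UnityEngine;

// Sketch of the comparison: upload camera.cameraToWorldMatrix myself and
// compare it in the shader against the auto-filled _CameraToWorld.
// "_ManualCameraToWorld" is a name I made up for the hand-fed copy.
[RequireComponent(typeof(Camera))]
public class CameraToWorldCompare : MonoBehaviour
{
    void OnPreRender()
    {
        Camera cam = GetComponent<Camera>();
        Shader.SetGlobalMatrix("_ManualCameraToWorld", cam.cameraToWorldMatrix);

        // Shader side (CG/HLSL):  float4 a = mul(_CameraToWorld, v);
        //                         float4 b = mul(_ManualCameraToWorld, v);
        // Shader side (GLSL):     vec4 a = _CameraToWorld * v;  etc.
        // Then output abs(a - b) as a colour to see where (or whether) they disagree.
    }
}
```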
I bring this up because I only noticed today that Unity recommends using gameobject.renderer.localToWorldMatrix instead of gameobject.transform.localToWorldMatrix. I have no idea how my code even works!
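Here’s the throwaway check I’m using to see whether the two matrices ever actually differ on my objects (sketch only, assumes a Renderer on the same GameObject):

```csharp
using UnityEngine;

// Quick sanity check: does renderer.localToWorldMatrix ever differ from
// transform.localToWorldMatrix on the objects I care about?
[RequireComponent(typeof(Renderer))]
public class LocalToWorldCompare : MonoBehaviour
{
    void Update()
    {
        Matrix4x4 fromRenderer  = GetComponent<Renderer>().localToWorldMatrix;
        Matrix4x4 fromTransform = transform.localToWorldMatrix;

        if (fromRenderer != fromTransform)
            Debug.Log(name + ": renderer and transform localToWorldMatrix differ\n"
                      + fromRenderer + "\n" + fromTransform, this);
    }
}
```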
Does this mean the resulting matrix (using renderer) will be row-major for HLSL/CG and column-major for GLSL?
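For what it’s worth, on the scripting side the math already behaves like column-vector math (v' = M * v), which is why I’d expect not to need a manual transpose for either backend. Quick sketch of how I checked:

```csharp
using UnityEngine;

// Sketch of how I convinced myself the scripting side assumes column vectors,
// whatever the shading language ends up being.
public class ColumnVectorCheck : MonoBehaviour
{
    void Start()
    {
        Matrix4x4 m = Matrix4x4.TRS(new Vector3(1f, 2f, 3f),
                                    Quaternion.identity, Vector3.one);

        // MultiplyPoint3x4 treats the point as a column vector, so a pure
        // translation matrix just adds the offset:
        Debug.Log(m.MultiplyPoint3x4(Vector3.zero));   // effectively (1, 2, 3)
    }
}
```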
While we’re at it, what’s with the weird documentation for Camera.cameraToWorldMatrix?

“Note that camera space matches OpenGL convention: camera’s forward is the negative Z axis. This is different from Unity’s convention, where forward is the positive Z axis.”
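If I’m reading that right, it just means the camera-space basis is Z-flipped relative to the camera’s Transform, so I’d expect the camera matrix’s Z column to point opposite to transform.forward. This is the little dump I’d use to see it (sketch, assumes the script sits on the camera):

```csharp
using UnityEngine;

// Sketch of what that doc note seems to mean in practice: I expect the third
// column of cameraToWorldMatrix to point along -transform.forward (OpenGL-style
// camera), while the transform's own matrix keeps +Z as forward.
[RequireComponent(typeof(Camera))]
public class CameraSpaceConvention : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        Vector3 zFromCameraMatrix    = cam.cameraToWorldMatrix.GetColumn(2);      // expected: -forward
        Vector3 zFromTransformMatrix = transform.localToWorldMatrix.GetColumn(2); // expected: +forward
        Debug.Log("camera matrix Z:    " + zFromCameraMatrix);
        Debug.Log("transform matrix Z: " + zFromTransformMatrix);
        Debug.Log("transform.forward:  " + transform.forward);
    }
}
```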
Screen space in DX has X going right, Y down, and Z into the display, so it’s RHS (with origin at top left).
Screen space in GL, if I’m not mistaken, is X going right, Y up, and Z into the display, so it’s LHS (with origin at bottom left).
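Just to double-check my own reasoning on those two, here’s the throwaway handedness check I mean (plain textbook cross product, nothing Unity-specific apart from Vector3):

```csharp
using UnityEngine;

// Tiny check of the handedness argument above, using the component-wise cross
// product: a basis is right-handed iff X cross Y == Z (right-hand rule).
public static class ScreenSpaceHandedness
{
    // Textbook formula, so no engine convention is involved.
    static Vector3 Cross(Vector3 a, Vector3 b)
    {
        return new Vector3(a.y * b.z - a.z * b.y,
                           a.z * b.x - a.x * b.z,
                           a.x * b.y - a.y * b.x);
    }

    public static void Check()
    {
        Vector3 right      = new Vector3(1, 0, 0);
        Vector3 up         = new Vector3(0, 1, 0);
        Vector3 down       = -up;
        Vector3 intoScreen = new Vector3(0, 0, -1);   // viewing the screen in a frame with +Z out of it

        // DX screen space: X right, Y down, Z into the screen.
        Debug.Log(Cross(right, down) == intoScreen);   // true  -> right-handed

        // GL screen space: X right, Y up, Z into the screen.
        Debug.Log(Cross(right, up) == intoScreen);     // false -> left-handed
    }
}
```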
Is Unity talking about some other convention here?