Help with fixing a camera view matrix created with Matrix4x4.TRS. Resolved by making the scale negative? (SOLVED)

I’ve been working a lot with rendering, command buffers, and setting projection / view matrices these past few days and it seems I’ve run into a really strange issue.

I created what I thought was a standard view matrix using Matrix4x4.TRS and Quaternion.LookRotation, and somehow it flips the axes. What's particularly jarring is that triangles farther away get rendered on top of closer triangles, turning any visible mesh inside out.

Here is the code that produces the problem:

Vector3 dir = Random.onUnitSphere; // random view direction
Vector3 tan = Vector3.ProjectOnPlane(Random.onUnitSphere, dir).normalized; // random up vector perpendicular to dir

Vector3 position = meshEnvironment.position + 3 * dir;
Quaternion orientation = Quaternion.LookRotation(dir, tan);
Vector3 scale = Vector3.one;
float width = 2f;

// World-to-camera as the inverse of the camera's local-to-world TRS
Matrix4x4 viewMatrix = Matrix4x4.TRS(position, orientation, scale).inverse;

Matrix4x4 projMatrix = Matrix4x4.Ortho(-width / 2, width / 2, -width / 2, width / 2, 0.03f, 1000f);

Camera.main.worldToCameraMatrix = viewMatrix;
Camera.main.projectionMatrix = projMatrix;


The above image is what results. The arrow is supposed to point at the surface, oriented to the normal, and the light should be coming from above, which tells me the axes have been flipped. Both the arrow and the rock mesh are inside out, and the arrow is actually behind the rock in world space relative to the camera, yet it's rendered on top.

What fixed the issue was making the scale negative and correspondingly flipping the sign of the position offset. The code that surprisingly "worked":

Vector3 position = meshEnvironment.position - 3 * dir; // offset flipped to the other side
Quaternion orientation = Quaternion.LookRotation(dir, tan);
Vector3 scale = -Vector3.one; // negative scale


The above image is what resulted from the negative scale. The arrow now points toward the surface, oriented to the correct normal, a shadow is cast, the axes are correctly oriented in the image, and the draw order is correct.

But I have no idea why this worked; in my mind it should not have. Since I'm working a lot with cameras, rendering, Graphics.DrawMesh, and command buffers, it's really unsettling to have this negative scale lying around, because it might cause errors further down the line. I must be doing something wrong.

Any help would be appreciated!

I think I’ve solved the mystery.

It turns out there is a difference in the handedness of view matrices versus transform matrices.

For a standard camera, camera.worldToCameraMatrix (the view matrix) and camera.transform.worldToLocalMatrix are equivalent EXCEPT for row 2 (the z row), which is sign-flipped between the two:

camera.worldToCameraMatrix.GetRow(2) = -camera.transform.worldToLocalMatrix.GetRow(2)

I first thought this was a strange Matrix4x4.TRS or Quaternion.LookRotation problem, but they both work and orient transforms the way you'd expect.

The reason is that Unity uses a left-handed system for world and transform coordinates but a right-handed (OpenGL-style) convention for view matrices, where the camera looks down -z. If a transform's basis is (right, up, forward), the corresponding view matrix uses (right, up, -forward).
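As a sanity check: negating row 2 is the same as left-multiplying the world-to-local matrix by diag(1, 1, -1, 1). Here's a plain-Python sketch of that algebra (hand-rolled matrices, outside Unity), showing that a point in front of the camera lands at negative z in view space, as the right-handed convention expects:

```python
# Hand-rolled 4x4 helpers; no Unity involved.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Negating row 2 == left-multiplying by diag(1, 1, -1, 1).
flip_z = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

# Camera at the origin looking along world +z: worldToLocal is the identity.
world_to_local = [[float(i == j) for j in range(4)] for i in range(4)]
view = mat_mul(flip_z, world_to_local)  # same effect as SetRow(2, -GetRow(2))

# A point 5 units in front of the camera ends up at negative view-space z,
# matching the right-handed (OpenGL-style) convention view matrices use.
p_view = apply(view, [0.0, 0.0, 5.0, 1.0])
assert p_view[2] == -5.0
```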

This simple code fixes the view matrix right up:

Matrix4x4 viewMatrix = Matrix4x4.TRS(
		position,
		orientation,
		scale
	).inverse;
viewMatrix.SetRow(2, -viewMatrix.GetRow(2));
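I believe this also explains why the earlier negative-scale hack appeared to work. With uniform scale -1, TRS(p, R, -one).inverse equals the ordinary world-to-local matrix with its first three rows negated, i.e. diag(-1, -1, -1, 1) times it. That includes the z-row flip a view matrix needs; the leftover x/y negation is just a 180° roll about the view axis, which my random up vector makes unobservable, and moving the camera to the opposite side keeps the mesh in front of it. A plain-Python sketch of that identity (hand-rolled matrices, outside Unity):

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def trs_inverse(pos, rot3, scale):
    # (T R S)^-1 = S^-1 R^T T^-1 for a pure rotation rot3 and uniform scale.
    s = 1.0 / scale
    r_inv = [[rot3[j][i] * s for j in range(3)] + [0.0] for i in range(3)]
    m = r_inv + [[0.0, 0.0, 0.0, 1.0]]
    t_inv = [[1, 0, 0, -pos[0]], [0, 1, 0, -pos[1]],
             [0, 0, 1, -pos[2]], [0, 0, 0, 1]]
    return mat_mul(m, t_inv)

# Any rotation will do; here 30 degrees about y.
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
R = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
p = [1.0, 2.0, 3.0]

plain = trs_inverse(p, R, 1.0)   # worldToLocalMatrix
neg = trs_inverse(p, R, -1.0)    # like Matrix4x4.TRS(p, R, -one).inverse

flip_xyz = [[-1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]
expect = mat_mul(flip_xyz, plain)  # rows 0, 1 and 2 negated

assert all(abs(neg[i][j] - expect[i][j]) < 1e-9
           for i in range(4) for j in range(4))
```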

When I then wanted to recover the transform's position and orientation from a view matrix, I did the following:

Matrix4x4 cameraToWorld = tmh.ViewMatrix.inverse; // tmh.ViewMatrix holds the fixed view matrix from above
Vector3 viewPos = cameraToWorld.GetPosition();

// MultiplyVector transforms a direction (w = 0); it's equivalent to the
// implicit Vector3 -> Vector4 conversion but makes the intent explicit
Vector3 worldU = cameraToWorld.MultiplyVector(Vector3.right) * 0.5f;
Vector3 worldV = cameraToWorld.MultiplyVector(Vector3.up) * 0.5f;
Quaternion shovel_rotation = Quaternion.LookRotation(worldU, worldV);
Gizmos.DrawWireMesh(
		gizmoShovel,
		0,
		viewPos,
		shovel_rotation * Quaternion.Euler(0, 0, -90f),
		Vector3.one * 0.5f
	);


Not only is the camera view-matrix correct, but the shovel & square gizmo are oriented properly as well.

[EDIT]
Also, a side note: if you're using shaders, command buffers, and manual rendering, make sure to call command_buffer.SetViewProjectionMatrices(viewMatrix, projectionMatrix).

Beyond that, if you're using non-trivial camera matrices and a RenderTexture whose dimensions differ from your screen, you may want to do the screen-space transformations manually. For me, the Screen Space node in Shader Graph, as well as the screen-space options for vertex positions, were quite buggy and inconsistent. I replaced them by manually transforming vertex world-space positions via the viewMatrix and projectionMatrix, which fixed a lot of minor but strange and inconsistent shader artifacts. For example, here's how I obtained NDC (screen-space) coordinates:
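The transformation itself is just clip = projMatrix * viewMatrix * worldPos followed by a divide by w. As a plain-math sketch (Python, outside Unity; the ortho function below mirrors what Matrix4x4.Ortho builds in the OpenGL convention, with NDC z in [-1, 1]):

```python
# Sketch of world -> NDC: clip = P * V * world, then perspective divide.
def ortho(l, r, b, t, n, f):
    # Mirrors Matrix4x4.Ortho(left, right, bottom, top, zNear, zFar)
    return [[2 / (r - l), 0, 0, -(r + l) / (r - l)],
            [0, 2 / (t - b), 0, -(t + b) / (t - b)],
            [0, 0, -2 / (f - n), -(f + n) / (f - n)],
            [0, 0, 0, 1]]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

P = ortho(-1, 1, -1, 1, 0.03, 1000.0)  # width = 2, near/far as in the post
# Camera at the origin looking along world +z, with the view z row flipped:
V = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

world = [0.5, 0.25, 5.0, 1.0]  # a point in front of the camera
clip = apply(P, apply(V, world))
ndc = [clip[i] / clip[3] for i in range(3)]  # divide by w (w = 1 for ortho)

assert ndc[0] == 0.5 and ndc[1] == 0.25     # inside the unit square
assert -1.0 < ndc[2] < 1.0                  # inside the depth range
```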