Does UNITY_MATRIX_MV * UNITY_MATRIX_IT_MV = Identity?

Hi there,

I want to convert a vertex coordinate from model space, into World * View space, and then back.

I know that the first step is:
v.vertex = mul(UNITY_MATRIX_MV, v.vertex);

But is the second step this?
v.vertex = mul(UNITY_MATRIX_IT_MV, v.vertex);

So long as nothing happens in between these two steps the vertex should be exactly where it was in the beginning. That is, I’m assuming that UNITY_MATRIX_MV * UNITY_MATRIX_IT_MV = Identity.

I know that a matrix multiplied by its “inverse” = an identity matrix. But… What’s an “inverse transpose”? Is that the same thing as an inverse or something weird? Is any matrix multiplied by its inverse transpose = an identity matrix?

The reason I need to know this is that, when writing surface shaders, some sneaky internal process transforms the vertex by WorldViewProjection FOR you after applying your vertex shader. So if you perform the transformations yourself (in order to modify the vertex in world, view or projection space, perhaps), Unity will go right ahead and perform those same transformations again, doubling them up on you, leaving you with a mess of vertices and no indication as to what went wrong. Don't you love how this isn't documented anywhere?

Furthermore, when writing fragment shaders, the vertex shader convention is completely different - you have to do the transformations yourself. This is less convenient, but it gives you more control and prevents you from having to perform redundant forward-and-back matrix transformations.

A matrix multiplied by its inverse transpose is not the identity matrix.

Inverse transpose matrices are normally used for transforming normals, if I remember correctly. I don't think it's what you're after in this case, although I'm not too sure what you are trying to achieve.
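That's because normals have to stay perpendicular to the surface. If vertices (and tangent vectors $t$) transform by a matrix $M$, transforming normals $n$ by the inverse transpose is exactly what preserves perpendicularity:

$$n' \cdot t' = \left((M^{-1})^\top n\right) \cdot (M t) = n^\top M^{-1} M\, t = n \cdot t = 0$$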

Just this transformation should put the vertex into view space.

v.vertex = mul(UNITY_MATRIX_MV, v.vertex);

I don't think there is a built-in inverse of the model-view matrix in Unity. You will need to calculate it from a script like this:

Matrix4x4 InverseModelViewMatrix = Matrix4x4.Inverse(Camera.main.worldToCameraMatrix * transform.localToWorldMatrix);

You can then pass it in as a uniform.
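For example, with a script like this on the object (the uniform name _InverseMV and the class name are just examples I made up):

using UnityEngine;

public class PassInverseMV : MonoBehaviour
{
    void Update()
    {
        // Inverse of (view * model), recomputed each frame in case
        // the object or the camera moves
        Matrix4x4 inverseMV = Matrix4x4.Inverse(
            Camera.main.worldToCameraMatrix * transform.localToWorldMatrix);
        GetComponent<Renderer>().material.SetMatrix("_InverseMV", inverseMV);
    }
}

Then declare float4x4 _InverseMV; in the shader and use it like any other matrix.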

I'm pretty sure most of that is correct :slight_smile:

Good reference http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/

As others have pointed out, the inverse transpose of a matrix is not the same as the inverse. It is closer to the original matrix than it is to the inverse.
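To make that concrete: for a pure rotation $R$, the inverse is the transpose, so the inverse transpose is the original matrix itself:

$$\left(R^{-1}\right)^\top = \left(R^\top\right)^\top = R$$

The two only drift apart once scaling or shearing enters the matrix, and even then $M (M^{-1})^\top$ is generally not the identity.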

It may not be explicitly documented, but it should be perfectly expected that surface shaders perform the MVP multiplication automatically. The vert function is optional, and executed in addition to the operations you would normally expect from a shader.

That said, I agree that it is frustrating that vertex operations have to be carried out in object space. You appear to want to perform them in eye space. What is the effect you are trying to achieve? Could you do it in world space?

Thanks for the help.

I managed to achieve the effect I was looking for, though I had to make compromises. Largely my frustrations were with the lack of documentation and confusion over the built-in values:

  • _World2Object is NOT the inverse of _Object2World.

_Object2World functions exactly how you’d expect in transforming vertices into world space, but once there, they can’t be brought back with _World2Object. Instead, you have to supply that yourself with Transform.worldToLocalMatrix. What _World2Object actually does is NOT documented. In fact, the documentation outright lies to you: http://docs.unity3d.com/Documentation/Components/SL-BuiltinValues.html

  • Unity does NOT provide any inverse matrices.

While Unity does supply the World, View and Projection transforms, and various combinations (such as MV and MVP), it does not provide the inverse of any of them. Instead, you have to calculate them yourself, which should be an easier task than it is…

  • Matrix math in Unity is backwards.

With most (if not all) graphics pipelines, the correct order to multiply matrices together is Model * View * Projection. However, Unity got their matrix multiplication operators backwards, so in Unity you have to write Projection * View * Model. This is like saying 2 / 10 = 5.

  • Camera.projectionMatrix is wrong.

The matrix returned by Camera.projectionMatrix is not necessarily the matrix actually used to transform the vertices in the rendering pipeline. It is correct for OpenGL but D3D does something completely different, as shown here: How do I reproduce the MVP matrix? - Questions & Answers - Unity Discussions

  • Undocumented built-in values.

There exist built-in values provided to shaders that are not documented anywhere. For example, _CameraToWorld. Whether this particular variable actually works is an open question: http://forum.unity3d.com/threads/172355-_CameraToWorld-broken

The effect I was actually trying to achieve was to reduce Z-fighting using depth bias. Outlined below is my adventure in trying to accomplish this:

  • Using the “Offset” shaderlab value.

First I tried to fix the objects afflicted with Z-fighting artifacts by adding an Offset in the shader that drew these objects. Even after applying significant offset values, this did not help with the Z-fighting at all. Since all of the objects were offset by the same amount, their depth values were still neck and neck.
The problem here was that the depth offset was applied for BOTH Z-testing AND Z-writing. If it only wrote to the Z-buffer with a bias, and tested normally, then the Z-fighting caused by co-planar polygons would be resolved completely, as rendering order would resolve depth occlusions where you'd normally experience Z-fighting.

  • Using two passes, where the first pass writes to the Z-Buffer with a bias.

I wrote a first pass, with Z-writing enabled, but this pass had a small bias. The second pass wouldn't Z-write, but would test its depth against the biased depth values written by the previous pass. This achieved basically the exact result I was looking for.
Though there was one obvious problem: it required two passes. Twice the geometry. Not really a good solution at all.
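Schematically, it looked something like this (a simplified sketch; the actual pass contents are elided):

// Pass 1: depth-only, biased toward the camera
Pass {
    Offset -1, -1
    ZWrite On
    ColorMask 0
}
// Pass 2: color, tested against the biased depth from pass 1
Pass {
    ZWrite Off
    ZTest LEqual
    // ... the usual vertex/fragment program ...
}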

  • Using a property “_Offset” as an argument for Offset, with each object having its own _Offset value.

Basically, this required sorting the objects by distance from the camera, and then giving a unique offset value to each object based on the sort. Obviously not ideal when working with many objects, as the offset will gradually become huge, but it'd still work.
However, Offset does not allow properties as arguments. Go figure.

  • Translating and scaling each object's Transform away from the camera, to achieve the same effect as Offset.

Using the same sorting as before, I tried scaling and positioning the objects further out in the OnWillRenderObject() phase. This would affect the Z position of each object without affecting its appearance in the XY plane. This worked; however, it introduced many complexities, especially when the objects were moved around by other scripts.
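The per-camera adjustment looked roughly like this (a sketch; depthFactor is the per-object value from the distance sort, and baseScale caches the original scale):

void OnWillRenderObject()
{
    Camera cam = Camera.current;
    // Push the object away from the camera, and scale it up by the same
    // factor so it covers roughly the same area on screen
    Vector3 fromCam = transform.position - cam.transform.position;
    transform.position = cam.transform.position + fromCam * depthFactor;
    transform.localScale = baseScale * depthFactor;
}

Everything then has to be put back after rendering, which is where the other scripts moving the objects made things messy.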

  • Scaling the Camera’s near and far clip planes before rendering each object.

In OnWillRenderObject(), I’d scale the current camera’s near and far plane distance by some amount. This would cause the object to have an offset depth after being transformed by the projection matrix.
However, Unity doesn't allow per-object camera settings. As a result, the clip-plane changes applied identically to every object, and the z-planes would just jump around. Again, go figure.

  • Passing the modified projection matrix as a shader variable.

The idea here was to perform the vertex transformations myself, using the projection matrix I provided. However, as noted above, Unity goes right ahead and does the MVP transformations for you anyway, even if you've done them yourself. This is why surface shaders SHOULD NOT TOUCH THE VERTICES IF YOU HAVE WRITTEN YOUR OWN VERTEX SHADER, as is the case with fragment shaders.
Honestly, you don't know the hours of confusion I had to suffer trying to figure out what the hell was going on behind the scenes. I'm not stupid, Unity; you don't have to save me the inconvenience of transforming the vertices, as I am completely capable of doing this myself.
This is not the only example of this, and I've seen plenty of other cases where other developers fell victim to this awful trap, especially since it completely violates your expectations after viewing the fragment shader examples.

  • Transforming the vertices into projection space, applying the offset manually, then transforming them back, so that Unity can go ahead and transform them a second time.

Please kill me now.
Simply trying to get the correct matrix to pass in, in order to actually perform the correct transformations, was enough of a migraine. See the above reasons.
After I somehow managed to pass in the correct matrices, I tried offsetting vertex.z by some amount. I don't really understand what the z and w components mathematically correspond to, so I doubted I'd get the desired effect from this approach. The results didn't line up with what I achieved earlier using different approaches.
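(For what it's worth: under the usual OpenGL-style perspective projection, $w_{\text{clip}} = -z_{\text{eye}}$, and the value the Z-buffer actually compares is the post-divide depth $z_{\text{clip}} / w_{\text{clip}}$, so a fixed nudge to $z$ shrinks with distance instead of staying constant on screen.)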

  • Transforming the vertices into world space, scaling each vertex position away from the _WorldSpaceCameraPos, then transforming back.

Sounds easy enough? Wrong.
I used _Object2World to position the vertices in world space. All good.
I then used _World2Object to position them back after making the changes.
Then it all went wrong. _World2Object is NOT the inverse of _Object2World, no matter how much the documentation might say it is. Try it yourself.
Supplying Transform.worldToLocalMatrix as a shader variable did the trick, though.
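In shader terms, the working round trip boiled down to something like this (_WorldToLocal and _DepthScale are names I supplied myself from script via SetMatrix/SetFloat):

// Object space -> world space, using the built-in matrix
float4 worldPos = mul(_Object2World, v.vertex);
// Scale the vertex position away from the camera to bias its depth
float3 fromCam = worldPos.xyz - _WorldSpaceCameraPos;
worldPos.xyz = _WorldSpaceCameraPos + fromCam * _DepthScale;
// World space -> object space, using Transform.worldToLocalMatrix from C#
v.vertex = mul(_WorldToLocal, worldPos);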

So yeah, basically, what seemed like a simple solution to a complex problem quickly spiralled down into a headache-inducing, complicated mess of depth sorting and vertex-shader hacks to work around traps, strange API limitations and design choices, and poorly written documentation.

Can’t wait for ShaderLab v2.0.


I noticed some of those myself, but I had no idea about the rest, thanks!

Just one thing:

[quote="Zergling103"]
This is why surface shaders, as with fragment shaders, SHOULD NOT TOUCH THE VERTICES IF YOU HAVE WRITTEN YOUR OWN VERTEX SHADER.
[/quote]
I always had to transform the vertices myself in my own vert/frag shaders. Unity does not touch these at all. I'm not sure about surface vertex shaders, though… I don't use them. Still, are you sure something like this wouldn't work?

 o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
 o.pos.z *= offset;

If that doesn’t work, try o.pos.z *= offset * o.pos.w because… magic.
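The "magic" being the perspective divide: the GPU divides clip-space z by w, so an additive version would keep the shift constant in post-divide depth:

o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
// (z + offset * w) / w = z / w + offset: the shift survives the divide intact
o.pos.z += offset * o.pos.w;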

No, with surface shaders, it does do the transformations for you.

However, the transformations occur AFTER your vertex shader is applied, meaning you're strictly working in model space, and you do not have access to the vertex position after it has been transformed into world, view, or projection space.

Hmmm… one more reason not to use surface shaders…

By the way… I think I know you… or at least your art :wink: The world is such a small place.

Hmm, well, you'd want to use surface shaders if you want your shader to work with the lighting pipeline. Otherwise, it's better to use fragment shaders, as this bypasses that extra processing.

Also, I don't recognize the name Dolkar, but it might be better to discuss that via PM so as not to dilute the topic. ^^;

I made my own cgincludes that I import into my own vert/frag shaders to support lighting… They are pretty clumsy and neither properly tested nor documented, so I'm not really keen on releasing them.

You probably don’t know me…

Anyways… this topic should be stickied or something! Quite useful gotchas there…

You are correct that the multiplication occurs in that order, but it's not really backwards. It's row-vector vs. column-vector convention: the way vectors and matrices are laid out determines how you multiply them together. Different conventions.
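Concretely, the two conventions are just transposes of each other:

$$v' = P\,V\,M\,v \quad\Longleftrightarrow\quad v'^\top = v^\top\, M^\top\, V^\top\, P^\top$$

With column vectors you write Projection * View * Model; with row vectors the very same chain reads Model, View, Projection from left to right.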

Also, Camera.projectionMatrix seems to work for me. In the link you referenced it looks like he is using that in his answer.

I know this thread is long dead but I came across it on my hunt for the ever-elusive shader documentation and I might be able to better answer the original question for other people who come here looking for answers.

I was in the same predicament as you, i.e. I needed to transform vertices into view space for modification and then transform them back to model space so that a surface shader could proceed. The inversion would not be necessary in a vert/frag shader, but it is for a surface shader.

@Zergling103 's long rant/post was very informative but unfortunately they were so close to the right answer in the original post…

In short, the order of operands matters when dealing with matrices: A × B ≠ B × A in general. That said, (A × B)ᵀ = Bᵀ × Aᵀ, and calling mul() with the vector on the left effectively multiplies by the transpose of the matrix.
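Spelled out: UNITY_MATRIX_IT_MV is $\left((MV)^{-1}\right)^\top$, so putting the vector on the left applies its transpose, which is exactly the inverse we want:

$$\operatorname{mul}\!\left(v,\ ((MV)^{-1})^\top\right) = \left(((MV)^{-1})^\top\right)^\top v = (MV)^{-1}\, v$$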

For your original example, you can go from model space to view space and back by doing the following:

// Transform model space to view space
v.vertex = mul(UNITY_MATRIX_MV, v.vertex);
// Transform view space back to model space: with the vector on the left,
// this applies the transpose of IT_MV, which is the inverse of MV
v.vertex = mul(v.vertex, UNITY_MATRIX_IT_MV);

As you can see, the second operation swaps the argument order from what you proposed. Hence, the inverse matrix was there the whole time. :slight_smile:

Hope this helps anyone who finds themselves in the same position.
