Thanks for the help.
I managed to achieve the effect I was looking for, though I had to make compromises. Most of my frustration came from the lack of documentation and from confusion over the built-in shader values:
- _World2Object is NOT the inverse of _Object2World.
_Object2World works exactly how you’d expect, transforming vertices into world space, but once there, they can’t be brought back with _World2Object. Instead, you have to supply that matrix yourself via Transform.worldToLocalMatrix. What _World2Object actually does is NOT documented. In fact, the documentation outright lies to you: http://docs.unity3d.com/Documentation/Components/SL-BuiltinValues.html
- Unity does NOT provide any inverse matrices.
While Unity does supply the World, View and Projection transforms, and various combinations (such as MV and MVP), it does not provide the inverse of any of them. Instead, you have to calculate them yourself and pass them in, which should be an easier task than it is…
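For anyone hitting the same walls, here’s roughly what the script side of that workaround looks like. This is just a sketch; the property names _WorldToLocal and _InvMVP are placeholders I picked, and you’d declare matching float4x4 uniforms in the shader.

```
using UnityEngine;

// Pushes the matrices Unity won't give you into the material each frame.
[RequireComponent(typeof(Renderer))]
public class MatrixUploader : MonoBehaviour
{
    void OnWillRenderObject()
    {
        Camera cam = Camera.current;
        if (cam == null) return;

        Material mat = GetComponent<Renderer>().sharedMaterial;

        // A trustworthy inverse of _Object2World, instead of _World2Object.
        mat.SetMatrix("_WorldToLocal", transform.worldToLocalMatrix);

        // Unity supplies no inverse matrices, so build MVP on the CPU and invert it.
        // (Mind the Camera.projectionMatrix caveat in the next point.)
        Matrix4x4 mvp = cam.projectionMatrix * cam.worldToCameraMatrix * transform.localToWorldMatrix;
        mat.SetMatrix("_InvMVP", mvp.inverse);
    }
}
```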
- Matrix math in Unity is backwards.
In the row-vector convention that most tutorials (and D3D) use, you multiply matrices together as Model * View * Projection. Unity’s matrices follow the column-vector convention instead, so you have to write Projection * View * Model, which feels completely backwards. This is like saying 2 / 10 = 5.
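For reference, composing the matrix yourself inside a vertex program has to follow that order. A sketch using the Unity 4-era built-in names (this sits inside the usual Pass/CGPROGRAM boilerplate):

```
#include "UnityCG.cginc"

struct v2f { float4 pos : SV_POSITION; };

v2f vert (appdata_base v)
{
    v2f o;
    // Column-vector convention: Projection * View * Model, applied right to left.
    float4x4 mvp = mul(UNITY_MATRIX_P, mul(UNITY_MATRIX_V, _Object2World));
    o.pos = mul(mvp, v.vertex);   // same result as mul(UNITY_MATRIX_MVP, v.vertex)
    return o;
}
```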
- Camera.projectionMatrix is wrong.
The matrix returned by Camera.projectionMatrix is not necessarily the matrix actually used to transform the vertices in the rendering pipeline. It is correct for OpenGL, but D3D does something completely different, as shown in this thread: “How do I reproduce the MVP matrix?” on Unity Answers.
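Later Unity versions expose GL.GetGPUProjectionMatrix, which is meant to return the matrix as the platform actually uses it. If that call exists in your version, something like this is worth trying (a sketch; the _GPUProjection name is mine):

```
using UnityEngine;

// Fetches the projection matrix with the platform-specific adjustments
// (e.g. flipped Y when rendering into a texture, different clip-space z
// conventions) that a raw Camera.projectionMatrix does not reflect.
[RequireComponent(typeof(Renderer))]
public class GpuProjectionUploader : MonoBehaviour
{
    void OnWillRenderObject()
    {
        Camera cam = Camera.current;
        if (cam == null) return;

        Matrix4x4 gpuProjection = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
        GetComponent<Renderer>().sharedMaterial.SetMatrix("_GPUProjection", gpuProjection);
    }
}
```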
- Undocumented built-in values.
There are built-in values provided to shaders that are not documented anywhere, for example _CameraToWorld. Whether this particular variable actually works is an open question: http://forum.unity3d.com/threads/172355-_CameraToWorld-broken
The effect I was actually trying to achieve was to reduce Z-fighting using depth bias. Outlined below is my adventure in trying to accomplish this:
- Using the “Offset” ShaderLab value.
First I tried to fix the objects afflicted with Z-fighting artifacts by adding an Offset in the shader that drew them. Even after applying significant offset values, this did not help with Z-fighting at all. Since all of the objects were offset by the same amount, their depth values were still neck and neck.
The problem here was that the depth offset was applied for BOTH Z-testing AND Z-writing. If it only wrote to the Z-buffer with a bias, and tested normally, then the Z-fighting caused by co-planar polygons would be resolved completely, as rendering order would resolve depth occlusions where you’d normally experience Z-fighting.
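For clarity, the first attempt was just a constant Offset in the material, something like this sketch; every object drawn with it gets the same bias for both testing and writing, so co-planar objects keep fighting:

```
Shader "Sketch/ConstantOffset" {
    SubShader {
        Tags { "RenderType" = "Opaque" }
        Pass {
            // factor, units: a constant depth bias for everything this material draws
            Offset -1, -1

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : COLOR
            {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}
```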
- Using two passes, where the first pass writes to the Z-Buffer with a bias.
I wrote a first pass with Z-writing enabled and a small depth bias. The second pass wouldn’t Z-write, but would test its depth against the biased depth values written by the previous pass. This achieved basically the exact result I was looking for.
Though there was one obvious problem: it required two passes. Twice the geometry. Not really a good solution at all.
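For reference, the two-pass version looked roughly like this. A sketch; I’ve used ColorMask 0 so the first pass only writes depth, and a positive offset so the written depth is pushed slightly away from the camera, letting a later unbiased draw of a co-planar surface win the test:

```
Shader "Sketch/BiasedDepthPrepass" {
    SubShader {
        Tags { "RenderType" = "Opaque" }

        // Pass 1: depth only, written with a small bias away from the camera.
        Pass {
            ColorMask 0
            ZWrite On
            Offset 1, 1
        }

        // Pass 2: draw the colour normally, testing against the biased depth
        // from the first pass, without writing depth again.
        Pass {
            ZWrite Off
            ZTest LEqual

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : COLOR
            {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}
```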
- Using a property “_Offset” as an argument for Offset, with each object having its own _Offset value.
Basically, this required sorting the objects by distance from the camera, and then giving each object a unique offset value based on the sort. Obviously not ideal when working with many objects, as the offsets gradually become huge, but it’d still work.
However, Offset does not accept properties as arguments. Go figure.
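The sorting itself is simple enough and got reused by the later attempts anyway. A sketch (the names are mine, and Renderer.material is used deliberately so each object gets its own material instance):

```
using System.Linq;
using UnityEngine;

// Orders the affected renderers by distance from the camera and hands each one
// a bias proportional to its position in that order.
public class DepthBiasSorter : MonoBehaviour
{
    public Renderer[] affected;        // the co-planar objects that fight
    public float stepPerObject = 1.0f;

    void LateUpdate()
    {
        Vector3 camPos = Camera.main.transform.position;

        Renderer[] sorted = affected
            .OrderBy(r => (r.transform.position - camPos).sqrMagnitude)
            .ToArray();

        for (int i = 0; i < sorted.Length; i++)
            sorted[i].material.SetFloat("_Offset", i * stepPerObject);
    }
}
```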
- Translating and scaling each object’s Transform away from the camera, to achieve the same effect as Offset.
Using the same sorting as before, I tried scaling and repositioning the objects further out in the OnWillRenderObject() phase. This affects the Z position of each object without affecting its appearance in the XY plane. This worked, but it introduced many complexities, especially when the objects were moved around by other scripts.
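Here’s a sketch of that trick; biasFactor would come from the same distance sort, the save/restore bookkeeping is my guess at the minimum needed, and juggling it alongside other scripts that move the object is exactly where things got messy:

```
using UnityEngine;

// Scales the object about the camera position, so it sits further away without
// changing its on-screen size, then restores the real transform after rendering.
public class PushAwayFromCamera : MonoBehaviour
{
    public float biasFactor = 1.001f;   // > 1 pushes the object away from the camera

    Vector3 basePosition;
    Vector3 baseScale;
    bool biased;

    void OnWillRenderObject()
    {
        Camera cam = Camera.current;
        if (cam == null || biased) return;

        basePosition = transform.position;
        baseScale = transform.localScale;
        biased = true;

        Vector3 camPos = cam.transform.position;
        transform.position = camPos + (basePosition - camPos) * biasFactor;
        transform.localScale = baseScale * biasFactor;
    }

    void OnRenderObject()
    {
        // The camera has finished rendering; put the real transform back.
        if (!biased) return;
        transform.position = basePosition;
        transform.localScale = baseScale;
        biased = false;
    }
}
```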
- Scaling the Camera’s near and far clip planes before rendering each object.
In OnWillRenderObject(), I’d scale the current camera’s near and far plane distance by some amount. This would cause the object to have an offset depth after being transformed by the projection matrix.
However, Unity doesn’t allow per-object camera settings. As a result, the clip planes would just jump around identically for every object. Again, go figure.
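For what it’s worth, the attempt looked something like this (a sketch, with a save/restore added so the planes don’t drift):

```
using UnityEngine;

// Nudges the current camera's clip planes just before this object renders,
// hoping the depth it writes ends up biased. Camera state isn't per-object,
// so in practice the change applies to everything the camera draws.
public class ClipPlaneNudge : MonoBehaviour
{
    public float planeScale = 1.001f;

    float savedNear, savedFar;
    Camera nudgedCamera;

    void OnWillRenderObject()
    {
        Camera cam = Camera.current;
        if (cam == null || nudgedCamera != null) return;

        nudgedCamera = cam;
        savedNear = cam.nearClipPlane;
        savedFar = cam.farClipPlane;
        cam.nearClipPlane = savedNear * planeScale;
        cam.farClipPlane = savedFar * planeScale;
    }

    void OnRenderObject()
    {
        if (nudgedCamera == null) return;
        nudgedCamera.nearClipPlane = savedNear;
        nudgedCamera.farClipPlane = savedFar;
        nudgedCamera = null;
    }
}
```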
- Passing the modified projection matrix as a shader variable.
The idea here was to perform the vertex transformations myself, using the projection matrix I provided. However, as noted above, Unity goes right ahead and does the MVP transformation for you anyway, even if you’ve done it yourself. This is why surface shaders SHOULD NOT TOUCH THE VERTICES IF YOU HAVE WRITTEN YOUR OWN VERTEX FUNCTION, the way plain vertex/fragment shaders leave them alone.
Honestly, you don’t know the hours of confusion I had to suffer trying to figure out what the hell was going on behind the scenes. I’m not stupid, Unity, you don’t have to save me the inconvenience of transforming the vertices as I am completely capable of doing this myself.
This is not the only example of this, and I’ve seen plenty of other cases where other developers fell victim to this awful trap, especially since it completely violates your expectations after viewing the vertex/fragment shader examples.
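With a plain vertex/fragment shader, supplying your own matrix does stick, because nothing re-transforms the output behind your back. A sketch; _CustomMVP is a name I made up and would be set from a script with material.SetMatrix:

```
Shader "Sketch/CustomMVP" {
    SubShader {
        Tags { "RenderType" = "Opaque" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4x4 _CustomMVP;   // set from C# with material.SetMatrix

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul(_CustomMVP, v.vertex);   // nothing else touches this afterwards
                return o;
            }

            fixed4 frag (v2f i) : COLOR
            {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}
```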
- Transforming the vertices into projection space, applying the offset manually, then transforming them back, so that Unity can go ahead and transform them a second time.
Please kill me now.
Simply trying to get the correct matrix to pass in, in order to actually perform the correct transformations, was enough of a migraine. See the above reasons.
After I somehow managed to pass in the correct matrices, I tried offsetting vertex.z by some amount. I don’t really understand what the z and w components mathematically correspond to, so I doubted I’d get the desired effect from this approach. (As far as I can tell, depth only becomes z / w after the perspective divide, so adding a constant to z alone doesn’t translate into a constant depth offset.) The results didn’t line up with what I achieved earlier using different approaches.
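For completeness, the round trip as a surface shader vertex modifier looks something like this. A sketch: _CustomMVP and _CustomInvMVP come from a script, and the multiplication by clipPos.w is my after-the-fact guess at what a divide-proof bias needs, not what I actually wrote at the time:

```
Shader "Sketch/ClipSpaceBias" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _DepthBias ("Depth Bias", Float) = 0.001
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert vertex:vert

        float4x4 _CustomMVP;      // set from C#; must match what Unity actually uses
        float4x4 _CustomInvMVP;   // its inverse, also set from C#
        float _DepthBias;
        fixed4 _Color;

        struct Input { float4 color : COLOR; };

        void vert (inout appdata_full v)
        {
            // Into clip space with the supplied matrix...
            float4 clipPos = mul(_CustomMVP, v.vertex);
            // ...bias the depth, scaled by w so it survives the perspective divide...
            clipPos.z += _DepthBias * clipPos.w;
            // ...and back to object space, so Unity's own MVP transform lands where we want.
            v.vertex = mul(_CustomInvMVP, clipPos);
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = _Color.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```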
- Transforming the vertices into world space, scaling each vertex position away from the _WorldSpaceCameraPos, then transforming back.
Sounds easy enough? Wrong.
I used _Object2World to position the vertices in world space. All good.
I then used _World2Object to position them back after making the changes.
Then it all went wrong. _World2Object is NOT the inverse of _Object2World, no matter how much the documentation might say it is. Try it yourself.
Supplying Transform.worldToLocalMatrix as a shader variable did the trick, though.
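Roughly the shader side of what finally worked, as a sketch using the Unity 4-era names; _WorldToLocal is Transform.worldToLocalMatrix fed in from a script (see the uploader sketch near the top):

```
Shader "Sketch/WorldSpacePush" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _PushFactor ("Push Factor", Float) = 1.001
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert vertex:vert

        float4x4 _WorldToLocal;   // Transform.worldToLocalMatrix, set from a script
        float _PushFactor;        // > 1 pushes vertices away from the camera
        fixed4 _Color;

        struct Input { float4 color : COLOR; };

        void vert (inout appdata_full v)
        {
            // Object space -> world space: this direction behaves as expected.
            float4 wpos = mul(_Object2World, v.vertex);
            // Scale the vertex away from the camera position.
            wpos.xyz = _WorldSpaceCameraPos + (wpos.xyz - _WorldSpaceCameraPos) * _PushFactor;
            // World space -> object space, with the matrix supplied from C#,
            // because _World2Object does not undo _Object2World.
            v.vertex = mul(_WorldToLocal, wpos);
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = _Color.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```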
So yeah, basically, what seemed like a simple solution to a complex problem quickly spiralled down into a headache-inducing, complicated mess of depth sorting and vertex shader hacks to work around traps, strange API limitations and design choices, and poorly written documentation.
Can’t wait for ShaderLab v2.0.