Extracting depth value from camera projection matrix

For fun I am writing a software rasterizer. I am projecting a point through the main camera matrices:

    Matrix4x4 V = Camera.main.worldToCameraMatrix;
    Matrix4x4 P = Camera.main.projectionMatrix;

and then call MultiplyPoint on the combined MVP matrix.

I can normalize the screen positions just fine, but I have some problems getting the z value right.
Straight from the MultiplyPoint method, the values range from +1.0f when the point is about FarClippingDistance away from the camera to exactly -1.0f when it is NearClipping away. Which is what it should be, I guess.

Now my question: how can I normalize this value to get exactly the same number as a shader's COMPUTE_EYEDEPTH? Or am I confusing some terms here?

Edit: just looked into UnityCG:

    #define COMPUTE_EYEDEPTH(o) o = -UnityObjectToViewPos( v.vertex ).z

This doesn’t take the projection matrix into consideration, right? That’s why it’s calculating a “different depth” than my software rasterizer?

Edit: I just can’t get this to work, here is my code:

    Vector3 projectedPoint = MVP.MultiplyPoint(coord); // MultiplyPoint returns a Vector3
    projectedPoint = new Vector3(projectedPoint.x + 1f, projectedPoint.y + 1f, projectedPoint.z + 1f) / 2f;
    float depth = projectedPoint.z; // this works perfectly

Then in the shader:

    v2f vert(appdata_base v) {
        v2f o;
        float4 position = UnityObjectToClipPos(v.vertex);
        o.pos = position;
        float depth = (1+position.z)/2;
        o.color = depth;

        return o;
    }

When logging the values and comparing them against the pixel color values, they are completely off.

My question boils down to:

What is the z value range of UnityObjectToClipPos, and what does it represent? Because it sure doesn’t represent the same matrix-based value as in my software rasterizer. It kinda works when I multiply it by 10, but what gives?

The eye depth is the view depth. It’s depth in game world units and does not use the projection matrix.

This should give you the same depth value as COMPUTE_EYEDEPTH for a given position:

    -Camera.main.worldToCameraMatrix.MultiplyPoint3x4(targetTransform.position).z
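To make the linearity concrete, here is a minimal Python sketch of the same math (hand-rolled matrix helper and an identity camera at the origin; the values are illustrative, not pulled from Unity):

```python
# Sketch: eye depth is linear view-space depth along the camera's forward axis;
# no projection matrix is involved.
# Unity convention: view space has the camera looking down -Z, so
# eye depth = -(worldToCameraMatrix * worldPos).z.

def multiply_point(m, p):
    """Apply a 4x4 matrix (row-major nested lists) to a 3D point (w = 1)."""
    x, y, z = p
    return [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(3)]

# World-to-camera matrix for a camera at the origin with identity rotation.
# Unity's camera looks down +Z in world space but -Z in view space, hence the
# flipped third row.
world_to_camera = [
    [1, 0,  0, 0],
    [0, 1,  0, 0],
    [0, 0, -1, 0],
    [0, 0,  0, 1],
]

point = (3.0, 4.0, 10.0)   # 10 units in front of the camera
view = multiply_point(world_to_camera, point)
eye_depth = -view[2]       # negate view-space z, like COMPUTE_EYEDEPTH does
print(eye_depth)           # -> 10.0
```

Note there is no projection anywhere in that computation: doubling the point's distance simply doubles the eye depth.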

UnityObjectToClipPos is going to produce something more similar to the MVP matrix you’re using, but you cannot use Camera.projectionMatrix directly if you want it to match. You need to use GL.GetGPUProjectionMatrix to transform the projection matrix from the camera to match the one the shader uses.
https://docs.unity3d.com/ScriptReference/Camera-projectionMatrix.html
https://docs.unity3d.com/ScriptReference/GL.GetGPUProjectionMatrix.html
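As an illustration of the convention mismatch, here is a standalone Python sketch (not Unity's actual code; the reversed-Z remap shown is my own approximation of what happens on Direct3D-style platforms):

```python
import math

# Sketch of why Camera.projectionMatrix can't be compared directly with what
# the shader sees. Camera.projectionMatrix always uses the OpenGL convention
# (NDC z in [-1, 1]); on Direct3D-style platforms the GPU matrix uses a
# reversed [0, 1] range (near -> 1, far -> 0). GL.GetGPUProjectionMatrix does
# the platform-specific remap; the one below is only an illustration.

def perspective_gl(fov_deg, aspect, near, far):
    """OpenGL-style perspective matrix (row-major nested lists), NDC z in [-1, 1]."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

def project_z(m, dist):
    """NDC z for a point `dist` units in front of the camera (view z = -dist)."""
    clip_z = m[2][2] * (-dist) + m[2][3]
    clip_w = m[3][2] * (-dist)
    return clip_z / clip_w

near, far = 0.3, 1000.0
gl = perspective_gl(60.0, 16 / 9, near, far)
print(project_z(gl, near), project_z(gl, far))    # ~ -1.0 and 1.0

# Reversed-Z remap in clip space: z' = (w - z) / 2, i.e. the new row 2 is
# (row 3 - row 2) / 2 of the GL matrix.
d3d = [row[:] for row in gl]
d3d[2] = [(gl[3][i] - gl[2][i]) / 2 for i in range(4)]
print(project_z(d3d, near), project_z(d3d, far))  # ~ 1.0 and 0.0
```

The same matrix that sends the near plane to -1 under the camera's OpenGL-style convention sends it to 1 under the reversed-Z GPU convention, which is why comparing a CPU-side z against the shader's z fails without GL.GetGPUProjectionMatrix.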

Additionally, if you’re trying to render a mesh, you need to use Renderer.localToWorldMatrix rather than the Transform’s matrix, as additional scaling and rotation may be applied as part of the import settings.
https://docs.unity3d.com/ScriptReference/Renderer-localToWorldMatrix.html

Note that clip space / projection space depth is non-linear, and is in a 0 to w range (or w to 0 if using reversed Z depth). By “w” I mean the w component of the float4 value that UnityObjectToClipPos produces. You want to do that divide in the fragment shader, not the vertex shader. Alternatively you can use the VPOS z in the fragment shader which is the clip space z divided by w.
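A quick Python sketch of that non-linearity, using a textbook GL-style perspective projection (illustrative near/far values, not tied to any particular camera):

```python
# Sketch: NDC depth (clip z divided by w) is non-linear in eye depth, unlike
# the linear eye depth from COMPUTE_EYEDEPTH. GL convention here: z in [-1, 1].

def ndc_depth(z_view, near, far):
    """NDC z for a point z_view units in front of the camera (view z = -z_view)."""
    a = (far + near) / (near - far)      # projection matrix entry [2][2]
    b = 2 * far * near / (near - far)    # projection matrix entry [2][3]
    clip_z = a * (-z_view) + b
    clip_w = z_view                      # row [0, 0, -1, 0] applied to view space
    return clip_z / clip_w

near, far = 0.3, 100.0
for d in (near, 1.0, 10.0, 50.0, far):
    print(d, ndc_depth(d, near, far))
# Halfway between near and far in eye depth does NOT land at NDC z = 0:
# almost all of the NDC range is spent close to the near plane.
```

This is why a vertex-shader `(1 + position.z) / 2` compared against a linearly remapped software-rasterizer depth comes out "completely off": the two values live on different curves until the divide by w is accounted for.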
