[built-in] SV_Depth not working with mirrors

I have a shader that offsets each pixel’s depth by a value using SV_Depth. This has been working great.

However, I just discovered this shader does not work properly with mirror reflections, and I can’t figure out why.

Here’s the shader in action (working):

The sphere is the object with the shader applied to it. Here, I’m simply adjusting the depth offset value for demonstration.

Here are the objects when reflected in a mirror:

As you can see, the sphere is being rendered as if it were behind the cube, even though it’s actually in front of it.

Here’s the depth offset shader:

Shader "Unlit/Frag_TestDepth"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _DepthOffset("DepthOffset", Float) = 0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
                float eyeDepth : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float _DepthOffset;

            inline float LinearEyeDepthToOutDepth(float z)
            {
                return (1 - _ZBufferParams.w * z) / (_ZBufferParams.z * z);
            }

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                o.eyeDepth = -UnityObjectToViewPos(v.vertex.xyz).z; // linear eye depth (positive in front of the camera)
                return o;
            }

            //fixed4 frag (v2f i) : SV_Target
            fixed4 frag (v2f i, out float pixelOffsetDepth : SV_Depth) : SV_Target
            {
                float z = (i.eyeDepth + _DepthOffset);

                float localLinearEyeDepthToOutDepth = LinearEyeDepthToOutDepth(z);
                pixelOffsetDepth = localLinearEyeDepthToOutDepth;

                return saturate(float4(localLinearEyeDepthToOutDepth.xxx, 1));
            }
            ENDCG
        }
    }
}

Fairly straightforward.

The weird thing is that it doesn’t seem to matter what value is written to SV_Depth. As soon as any value is written, the mirror reflection stops working properly. If I don’t write to SV_Depth (for example, by changing “out float” to “in float”), the mirror works as expected (though naturally the depth offset functionality is lost).

The mirror script is taken from this:
http://wiki.unity3d.com/index.php/MirrorReflection4

The render texture (for the mirror) that these objects are being rendered into does have a depth buffer, so I’m not quite sure what’s going on.

So the question is - why doesn’t writing to SV_Depth work for the mirror camera?

Any pointers would be helpful, because I’ve tried everything I can think of!

Revisiting this topic. I’d really like to solve this, but I’m out of ideas.

I attached a small sample project where you can see the problem.

Sorry for summoning you so rudely, @bgolus, but you’ve always been of such great help in the past!

6836303–795293–mirror.unitypackage (32.6 KB)

The _ZBufferParams values likely won’t be valid for any camera with a custom projection matrix, which any mirror camera is going to use. You’ll want to pass the view-space position of the vertices to the fragment, offset the z there, and then apply the current projection matrix to get the appropriate depth value.
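
_ZBufferParams is built from the camera’s near/far planes assuming a standard projection, so a depth value reconstructed from it won’t line up with what the mirror’s oblique projection actually writes. A minimal sketch of that approach, reusing _DepthOffset, _MainTex, and the appdata struct from the shader above (the Properties/SubShader boilerplate is unchanged and omitted here):

struct v2f
{
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
    float3 viewPos : TEXCOORD1; // view-space position, interpolated per pixel
};

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    o.viewPos = UnityObjectToViewPos(v.vertex.xyz);
    return o;
}

fixed4 frag (v2f i, out float pixelOffsetDepth : SV_Depth) : SV_Target
{
    // View-space z is negative in front of the camera, so subtracting the
    // offset pushes the surface further away (same direction as adding to
    // eyeDepth in the original shader).
    float3 viewPos = i.viewPos;
    viewPos.z -= _DepthOffset;

    // Re-project with whatever projection matrix this camera is actually
    // using (including the mirror's custom one) and do the perspective
    // divide to get the value the depth buffer expects.
    float4 clipPos = mul(UNITY_MATRIX_P, float4(viewPos, 1.0));
    pixelOffsetDepth = clipPos.z / clipPos.w;

    return saturate(float4(pixelOffsetDepth.xxx, 1));
}

Since UNITY_MATRIX_P is the projection matrix the GPU is actually using, clipPos.z / clipPos.w should already be in the range the depth buffer expects (including on reversed-Z platforms), with no extra remapping needed.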
