Accessing depth buffer from a surface shader.

I’m having some issues converting a fragment shader to a surface shader. What the fragment shader does, successfully, is check the difference between the depth buffer and the pixel’s depth and write another value based on it. However, when I try to do the same thing in a surface shader, the output is always 0.

Left is the surface, right is the fragment.


The relevant code for the fragment shader is:

uniform float _FoamStrength;
uniform sampler2D _CameraDepthTexture; //Depth Texture
uniform fixed4 _SpecularEdge, _SpecularDepth;

struct appdata {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

struct v2f {
    float4 pos : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert(appdata v)
{
    v2f o;
    o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}

half4 frag( v2f i ) : SV_Target
{

    float sceneZ = LinearEyeDepth (tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)).r);
    float objectZ = i.screenPos.z;
    float intensityFactor = 1 - saturate((sceneZ - objectZ) / _FoamStrength);  
    return lerp(_SpecularEdge, _SpecularDepth, intensityFactor);

}
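For reference, the falloff math in the fragment shader is easy to sanity-check on the CPU. This is a minimal Python sketch of the same formula, using made-up depth values (scene depth 10, object depths 10 / 9.5 / 8, `_FoamStrength` of 1):

```python
def saturate(x):
    """Clamp to [0, 1], matching HLSL's saturate()."""
    return max(0.0, min(1.0, x))

def foam_intensity(scene_z, object_z, foam_strength):
    """Mirrors the shader's: 1 - saturate((sceneZ - objectZ) / _FoamStrength)."""
    return 1.0 - saturate((scene_z - object_z) / foam_strength)

# Right at an intersection the depths match, so the factor is 1 (full edge color).
print(foam_intensity(10.0, 10.0, 1.0))  # 1.0
# Half a unit of separation falls off linearly.
print(foam_intensity(10.0, 9.5, 1.0))   # 0.5
# Beyond _FoamStrength units of separation it clamps to 0 (full depth color).
print(foam_intensity(10.0, 8.0, 1.0))   # 0.0
```

So the lerp fades from `_SpecularEdge` at the intersection to `_SpecularDepth` once the depth difference exceeds `_FoamStrength`.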

The relevant code for the surface shader is:

        struct Input {
            float4 screenPos;
        };

        half _FoamStrength;
        fixed4 _SpecularDepth, _SpecularEdge;
        uniform sampler2D _CameraDepthTexture;
 

        void surf(Input IN, inout SurfaceOutputStandardSpecular o)
        {
            float sceneZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r);
            float objectZ = IN.screenPos.z;
            float intensityFactor = 1 - saturate((sceneZ - objectZ) / _FoamStrength);
     
            fixed4 color = lerp(_SpecularEdge, _SpecularDepth, intensityFactor);

            o.Albedo = o.Specular = color;
        }

The depth values just aren’t right, so I suspect I am using them wrong. When I visualise the screen position it’s also a little different; the fragment version is much brighter.

Bump, anyone? Maybe someone has a working example of a surface shader with access to the depth buffer?

Try dividing screenPos.z by screenPos.w. Otherwise, use a copy of the clip-space position instead (and again divide z by w).
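The idea behind that z/w divide can be sketched outside the shader. Assuming screenPos carries the clip-space z and w components (as ComputeScreenPos leaves them), the perspective divide recovers the normalized device depth; the numbers here are made up:

```python
def ndc_depth(screen_pos):
    """Perspective divide on a (x, y, z, w) screen position:
    z is clip-space depth, w is clip-space w, and z / w yields the
    normalized device depth (0..1 on D3D-style platforms)."""
    x, y, z, w = screen_pos
    return z / w

# Hypothetical clip-space values: z = 5.0, w = 10.0 -> depth 0.5.
print(ndc_depth((3.0, 4.0, 5.0, 10.0)))  # 0.5
```

Note this gives the non-linear device depth, not an eye-space distance, which is part of why comparing it directly against LinearEyeDepth output misbehaves.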

Changed the code to:

IN.screenPos.z /= IN.screenPos.w;
float sceneZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r);

Now it just kind of flickers. I’m not sure how to get the clip-space position in a surface shader; normally I’d pass it through in a vert function, but I don’t think you can with surface shaders? It’s not listed as a possible input either.

You can add a vert function to a surface shader. See the normal extrusion surface shader example.

Yes, the thing I’m unclear about is how to access the Input struct as you’d do normally.

void vert (inout appdata_full v) {

It seems like you just have the appdata.

Ah, no, you can define your own input and output; appdata_full is just a common predefined struct. I can’t really find a good example, but you can change it like this:

struct my_struct {
    float4 some_data : TEXCOORD6;
};

my_struct vert(appdata_full input) {
    my_struct output;
    // Initialize all values in the struct
    return output;
}

The problem is screenPos.z is not the same thing as the object depth relative to the linear depth texture, so you need to compute that on your own.

Shader "Custom/SurfaceDepthTexture" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
        _InvFade ("Soft Factor", Range(0.01,3.0)) = 1.0
    }
    SubShader {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        LOD 200
       
        CGPROGRAM
        // Physically based Standard lighting model, and enable shadows on all light types
        #pragma surface surf Standard vertex:vert alpha:fade nolightmap

        // Use shader model 3.0 target, to get nicer looking lighting
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
            float4 screenPos;
            float eyeDepth;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        sampler2D_float _CameraDepthTexture;
        float4 _CameraDepthTexture_TexelSize;
       
        float _InvFade;

        void vert (inout appdata_full v, out Input o)
        {
            UNITY_INITIALIZE_OUTPUT(Input, o);
            COMPUTE_EYEDEPTH(o.eyeDepth);
        }

        void surf (Input IN, inout SurfaceOutputStandard o) {
            // Albedo comes from a texture tinted by color
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            // Metallic and smoothness come from slider variables
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;

            float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));
            float sceneZ = LinearEyeDepth(rawZ);
            float partZ = IN.eyeDepth;

            float fade = 1.0;
            if ( rawZ > 0.0 ) // Make sure the depth texture exists
                fade = saturate(_InvFade * (sceneZ - partZ));

            o.Alpha = c.a * fade;
        }
        ENDCG
    }
}

Thank you so much, and a great example of how to use vert properly as well. The syntax is still a bit daunting; I’m not sure what we can and cannot get away with.

One more question: in what case wouldn’t we have a depth buffer texture?

The depth texture only exists in specific cases. One of the following must be true:

  • Your camera is rendering using the deferred rendering path, either enabled on the camera or from project settings.
  • Your camera has Camera.depthTextureMode set to include DepthTextureMode.Depth; DepthNormals (or 5.4’s MotionVectors) don’t create a _CameraDepthTexture. Usually this gets enabled by a post-process effect on the camera, but it can also be done via script. On a project I’m working on I force it on with a simple editor-only script.
  • You have Soft Particles enabled in the quality settings. I believe this enables the depth texture for all cameras, though it might only be cameras with a particle system visible.
  • You have a realtime or mixed directional light in your scene with shadows enabled. The brightest shadowing directional light in the scene has its shadows rendered with a full-screen pass that uses only the camera depth. Important note: this is only true on non-mobile platforms! It’s also possible for this not to be true if you have cascades disabled on PC or consoles, as Unity may choose not to use these screen-space shadows. I believe there’s also a hidden setting in 5.4 to disable this behavior, and a future version of Unity will likely disable it entirely.

Only one of those needs to be true for a camera to render depth. If a camera has a culling mask such that no directional lights are visible, or doesn’t have a post process on it, or is forced to use forward rendering, etc., it will not have a depth texture.

I also modified the originally posted shader: I assumed testing _TexelSize would work to check for the existence of a depth texture, but it does not, since Unity doesn’t update _TexelSize to reflect null textures. This version instead tests whether the sampled depth is exactly zero, which should pretty much never happen in real-world situations.


Thank you for this it was extremely useful.

I have one question though.

The resulting IN.eyeDepth seems sufficient for determining the depth of each fragment.

What do rawZ, sceneZ, and the resulting math of sceneZ - partZ actually do?

Eye depth is exactly that: the world-space depth of that fragment (the pixel being drawn by a particular model) from the camera. The “raw z” is the value stored in the camera depth texture, which is either generated by a separate pass over the scene geometry in forward rendering, or the depth that was rendered during the deferred pass. The depth texture stores the clip-space depth, which is a non-linear 0.0 to 1.0 range; research Z depth and depth buffers elsewhere if you want to understand that. LinearEyeDepth() converts that non-linear 0.0 to 1.0 value into a linear depth, i.e. the world-space depth from the camera.
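As a sketch of what that linearization does, the conversion can be reproduced from the camera’s near and far planes. This uses the conventional (non-reversed-Z) form of Unity’s _ZBufferParams constants; treat the exact constants as an assumption, since reversed-Z platforms flip them:

```python
def linear_eye_depth(z01, near, far):
    """Convert a non-linear 0..1 depth-buffer value to an eye-space
    distance, mirroring UnityCG's LinearEyeDepth():
        1 / (z * _ZBufferParams.z + _ZBufferParams.w)
    where _ZBufferParams.z = (1 - far/near) / far
    and   _ZBufferParams.w = (far/near) / far.
    (Non-reversed-Z convention; reversed-Z platforms swap the constants.)"""
    zp_z = (1.0 - far / near) / far
    zp_w = (far / near) / far
    return 1.0 / (z01 * zp_z + zp_w)

# With near = 0.3 and far = 1000 (Unity's default clip planes):
print(linear_eye_depth(0.0, 0.3, 1000.0))  # ~0.3   -> the near plane
print(linear_eye_depth(1.0, 0.3, 1000.0))  # ~1000  -> the far plane
```

So a depth-buffer value of 0 maps to the near plane and 1 to the far plane, with most of the buffer’s precision concentrated near the camera.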

For opaque objects there’s not really a reason to do this, as the linearized value from the depth texture should match the eye depth calculated in the vertex shader. For transparent objects, however (which aren’t rendered into the depth texture), it lets you get the distance from that fragment to the closest scene geometry behind it. That’s what sceneZ - partZ does: it subtracts the fragment’s depth from the scene depth, so the resulting value is the distance to the scene.


Hello,

I see this topic is a bit old, but I am stuck with it :smile:
When I copy your code exactly ( @bgolus ) I get the following warning (attached screenshot: Unbenannt.PNG).
At Line 103 I have:
float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));

Does someone know what I am missing here?

Nothing, really. Surface shaders sometimes pass IN.screenPos hardcoded as (0,0,0,0), even though they never should.


Oh … okay, it somehow does not show up every time.
One last question: do you know where to find documentation for functions like SAMPLE_DEPTH_TEXTURE_PROJ? I can’t find anything about this macro.

It’s in the HLSLSupport.cginc file in Unity’s built-in shader code. You can download it from here:
https://unity3d.com/get-unity/download/archive

This should just be used as reference, not copied into your project btw.

Really it’s just a macro that calls tex2Dproj and returns the red channel. The tex2Dproj function takes a float4 uv, and is equivalent to calling tex2D(texture, uv.xy / uv.w).
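That equivalence is easy to demonstrate outside HLSL. In this sketch the “texture sample” is just a stand-in lookup function (everything here is hypothetical); the only point is that the projective variant divides xy by w before sampling:

```python
def tex2d(sample_fn, uv):
    """Stand-in for tex2D: sample directly at a 2D uv."""
    return sample_fn(uv[0], uv[1])

def tex2d_proj(sample_fn, uv4):
    """Stand-in for tex2Dproj: divide xy by w, then sample.
    Equivalent to tex2D(tex, uv.xy / uv.w)."""
    x, y, _, w = uv4
    return sample_fn(x / w, y / w)

# A fake "texture" that just echoes the coordinates it was sampled at.
sample = lambda u, v: (u, v)

# Both calls land on the same texel once the projective divide is applied.
print(tex2d(sample, (0.25, 0.75)))               # (0.25, 0.75)
print(tex2d_proj(sample, (0.5, 1.5, 0.0, 2.0)))  # (0.25, 0.75)
```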


So how would this be done with opaque objects? This solution stops working as soon as alpha:fade is removed. My camera is rendering depth.

Looking into it further, I think this isn’t possible without command buffers, because _CameraDepthTexture isn’t written until after the opaque objects are drawn. I would need to draw my object after depth but before deferred lighting.

Is it possible to write to the depth buffer inside a surface shader fragment (à la out float depth : SV_Depth)?

Nope. Not without modifying the generated code.
