_CameraDepthTexture is empty

Hi,

I’m aware that my question has been answered many times, but none of the solutions I found work.
I just want to retrieve the depth value of the camera (a HoloLens 1st gen in my case).
I implemented the following shader to do that:

Shader "Tutorial/Depth"{
    //show values to edit in inspector
    Properties{
        [HideInInspector] _MainTex("Texture", 2D) = "white" {}
    }

    SubShader{
        // markers that specify that we don't need culling
        // or comparing/writing to the depth buffer
        //Cull Off
        //ZWrite Off
        //ZTest Always

        Pass{
            CGPROGRAM
            //include useful shader functions
            #include "UnityCG.cginc"

            //define vertex and fragment shader
            #pragma vertex vert
            #pragma fragment frag

            //the rendered screen so far
            sampler2D _MainTex;

            //the depth texture
            sampler2D _CameraDepthTexture;


            //the object data that's put into the vertex shader
            struct appdata {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            //the data that's used to generate fragments and can be read by the fragment shader
            struct v2f {
                float4 position : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            //the vertex shader
            v2f vert(appdata v) {
                v2f o;
                //convert the vertex positions from object space to clip space so they can be rendered
                o.position = UnityObjectToClipPos(v.vertex);
                o.uv = ComputeScreenPos(o.position)
                return o;
            }


            //the fragment shader
            float4 frag(v2f i) : SV_TARGET{
                //get depth from depth texture
                float2 uv = i.uv.xy / i.uv.w;
                float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
                float linearDepth = Linear01Depth(depth);
                return linearDepth;
            }
            ENDCG
        }
    }
}

In addition, I’ve enabled the depth texture in a script:

public Camera cam;

void Awake()
{
    cam.depthTextureMode = DepthTextureMode.Depth;
}

But all the values I get are equal to 0:

RenderTexture rt = new RenderTexture(resWidth, resHeight, resDepth, RenderTextureFormat.ARGBFloat);
Graphics.Blit(depthMaterial.mainTexture, rt);
RenderTexture.active = rt;
depthTexture.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
depthTexture.Apply();

Can anybody help me figure out what my issue is, please?

Thanks in advance

Are you using the old standard render pipeline?
Have you assigned the camera to the field you’ve exposed (cam)?
Can you get the depth rendered to the screen with your shader?

The depth texture mode assignment looks correct to my eye.

EDIT1: Also, now that I took a closer look, is this actually functioning code (i.e. does it compile)? I can see typos there. First remove every syntax error before trying to proceed; then, if things still don’t work, think about what’s wrong.

EDIT2: You also have code where you divide uv.xy by a non-existent w component. Your input uv is only a two-dimensional float.

So please first fix the parts that won’t compile.

Hi, thank you for your answer.

  • I don’t know which render pipeline I’m using (if you mean the rendering path, I’m currently using forward, but I also tried deferred).
  • I’m not sure I understand what you mean, so tell me if I’m not answering correctly: I assigned the main camera of my scene to the “cam” field on the empty object containing my script.
  • No, I can’t get anything from “_CameraDepthTexture”; all the values I read are equal to 0.

Yes, everything compiles without errors. In addition, I removed the part where I divide uv.xy by uv.w, but I still get the same behaviour.

During play mode, I can see in the inspector that the camera is rendering depth, so I don’t understand why I get an empty texture.
I know that objects whose depth is rendered must have an opaque shader with a render queue <= 2500, but it still doesn’t work…

Have you installed a scriptable render pipeline like LWRP/URP or HDRP, and which one are you actually using? Or are you just using the standard, “old” renderer? It matters a lot with these shaders and depth textures.

And if you (for some reason) don’t see errors in your shaders, try it in another project. I’m sure it will NOT compile, as you have basic syntax errors there, like a missing semicolon.

Select your shader in the Project view, then check the Inspector and verify that the shader compiled correctly there, without errors.

I’m using the default render pipeline. I did try LWRP, but I got strange behaviour from the camera and the shaders and I don’t know why, so I came back to the default render pipeline.
Is that why I have these issues? If yes, could you advise me on which pipeline to use and how to use it (if you know a good documentation page or tutorial; otherwise I’ll look myself)?

Yeah, sorry, you were right about the errors. I fixed them, but there’s still no improvement.

Can you tell us a bit more about what you are trying to accomplish, so that it’s easier to help you?

i.e. where do you need that depth, where are you going to use it, and so on.
Right now I’m not sure where you are trying to use your RenderTexture code, etc.

If you just need the camera’s depth, you could render straight to a depth texture from a camera. You can do that by setting a RenderTexture as the camera’s Target Texture.
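For example, something like this (a minimal sketch; depthCam and depthRT are placeholder names, not from this thread):

public Camera depthCam;   // assign your camera in the Inspector
RenderTexture depthRT;

void Start()
{
    // 24 depth bits; RenderTextureFormat.Depth stores the depth values in the texture itself
    depthRT = new RenderTexture(512, 512, 24, RenderTextureFormat.Depth);
    depthCam.targetTexture = depthRT;  // the camera now renders into depthRT instead of the screen
}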

Or are you looking to build some post-processing effect that utilizes depth? Just guessing here.
If that is the case, and you are using the Post-Processing Stack v2, check the tutorial/info on how to create custom effects. It details pretty much every step needed to create Stack v2 effects.
https://docs.unity3d.com/Packages/com.unity.postprocessing@2.1/manual/Writing-Custom-Effects.html

HDRP/LWRP is a completely different story if you need post effects.

I’m just trying to compute the distance of each projected pixel from the camera. I’m not using post-processing effects.

Yes, I just want to render to a depth texture so I can read the value of each pixel and turn it into a real distance.

What should I do to achieve that?

Did you get it to work in the end? I have a similar issue with a shader: it’s working in play mode in the editor, but not on the device, as if there were no depth/normals values.

Yes, the answer by @bgolus from here:

Your shader needs a shadowcaster pass. The easiest way to do that, as long as you’re not modifying the vertex positions or adding alpha testing, is to add a Fallback shader. For most things you want this just before the last } in your shader:
FallBack "Legacy Shaders/VertexLit"

So you need to add it to the shader where you are trying to use _CameraDepthTexture.
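For example, using the shader from the first post (just a sketch of where the line goes; the elided parts stay as they were):

Shader "Tutorial/Depth" {
    // ... Properties and SubShader unchanged ...

    // The fallback supplies the ShadowCaster pass that depth rendering needs:
    FallBack "Legacy Shaders/VertexLit"
}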

Also, make sure your camera is setup to use this mode:

_myCamera.depthTextureMode = DepthTextureMode.Depth;

Nowadays, Unity will always render depth into an internal buffer, which you can read either via _CameraDepthTexture or _LastCameraDepthTexture.

To repopulate them manually, use this shader before rendering anything that needs depth:
shader ‘Render-depth’

// Use this shader to populate depth map of your camera.
//
// _camera.enabled=false; //keep always disabled, will be RenderWithShader() manually.
// NOTICE: if all cameras are always disabled, unity editor-scene-camera will affect your depthmaps!!
// To prevent it, ensure you have at least one main camera that's active. Anyway, continue:
// _camera.depthTextureMode = DepthTextureMode.Depth;
// _camera.targetTexture = _myRenderTex_with32depthBits;  // 'new RenderTexture(512,512,32);'
// _camera.RenderWithShader(thisShader,"");

Shader "Unlit/Depth_SimpleShadowcaster"
{
    SubShader
    {
        Pass {
            Name "ShadowCaster"
            Tags { "RenderType"="Opaque" "LightMode" = "ShadowCaster" }
            Cull Off
            ZWrite On
            ZTest LEqual
            ColorMask 0

            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                float2 depth : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                UNITY_TRANSFER_DEPTH(o.depth);
                return o;
            }

            float frag(v2f i) : SV_Target {
                // This macro expands to the return statement. On platforms with
                // native depth textures it simply returns 0; the actual depth is
                // written automatically from the fragment's position.
                // https://discussions.unity.com/t/743717/2
                UNITY_OUTPUT_DEPTH(i.depth);
            }
            ENDCG
        }
    }//end SubShader
}

After running this replacement shader, you will have _LastCameraDepthTexture accessible from shaders (that run later within this same frame), provided nothing else overwrites it. Use the Frame Debugger to check the order, and check the script execution order.
You can keep using it (in Graphics.Blit etc.) until you overwrite it with _myOtherCameraWithDepthMode.Render().
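In C#, the sequence from the shader's header comments looks roughly like this (a sketch; _depthCam, _depthRT and depthShader are placeholder names):

_depthCam.enabled = false;                           // keep disabled; we render it manually
_depthCam.depthTextureMode = DepthTextureMode.Depth;
_depthRT = new RenderTexture(512, 512, 32);          // 32 depth bits
_depthCam.targetTexture = _depthRT;
_depthCam.RenderWithShader(depthShader, "");         // depthShader = the replacement shader above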

_CameraDepthTexture will be empty during Graphics.Blit(myTexA, myTexB, myMaterial);, because that texture is only available while rendering through a camera. To use it during Blit(), your blitting shader needs _LastCameraDepthTexture instead.

Although not as performant, you could always dump _LastCameraDepthTexture into a custom black-and-white texture. Just make sure its format has enough precision, e.g. R32_SFloat:
shader ‘dump depth to preview-texture’

Shader "Custom/ZDepth_to_R_Texture" {

    Properties {
        _MinRange("Near Plane", Float) = 0
        _MaxRange("Far Plane", Float) = 1000
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"


            sampler2D _LastCameraDepthTexture;
            float _MinRange;
            float _MaxRange;


            struct appdata {
                float4 vertex : POSITION;
            };

            struct v2f {
                float4 screenPos : TEXCOORD0;
                float4 pos : SV_POSITION;
            };

            v2f vert (appdata v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.pos);
                return o;
            }

            float inverseLerp(float a, float b, float t){
                return (t-a)/(b-a);
            }

            float frag(v2f i) : SV_Target {
            float depth = tex2D( _LastCameraDepthTexture, i.screenPos.xy/i.screenPos.w ).r;
                      depth = LinearEyeDepth(depth);
                      depth = saturate(inverseLerp(_MinRange, _MaxRange, depth));
                      depth = 1-depth;
                return depth;
            }
            ENDCG
        }
    }
}

This is useful if you want to visualize depth at any moment in your frame, to ensure it’s as you expect:
debug depth to Inspector-panel

//any script can invoke DepthDebugMGR.instance.showLastDepth_DEBUG()
//to render the most recently observed depth.
//You can then look at this texture from the Unity Inspector panel.
//(GraphicsFormat below comes from UnityEngine.Experimental.Rendering.)
#region debug the depth
#if UNITY_EDITOR
    Material _showLastDepthMat;
    public RenderTexture _lastDepth;
    float _latestDepthTime = -999;
 
    public void showLastDepth_DEBUG(){
        Debug.Assert(_latestDepthTime < Time.unscaledTime,
                      $"you should only invoke{nameof(showLastDepth_DEBUG)} once per frame");
        _latestDepthTime = Time.unscaledTime;

        Texture tex = Shader.GetGlobalTexture("_LastCameraDepthTexture");
        int width  = tex ? tex.width  : 512;
        int height = tex ? tex.height : 512;
        bool create  = _lastDepth == null;
             create |= _lastDepth != null && tex != null && _lastDepth.width != tex.width;
        if (create){
            if(_lastDepth!=null){ DestroyImmediate(_lastDepth); }
            _lastDepth = new RenderTexture(width, height, 0, GraphicsFormat.R32_SFloat);
        }
        //use the material to copy the 'LastCameraDepthTexture' into this _lastDepth RT.
        Graphics.Blit(null, _lastDepth, _showLastDepthMat);
    }
#endif
#endregion
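Note that _showLastDepthMat has to be created somewhere, e.g. from the dump shader above (a sketch, assuming that shader's name from the previous post):

_showLastDepthMat = new Material(Shader.Find("Custom/ZDepth_to_R_Texture"));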

If you prepared o.screenPos (see the toy example below), remember that in the fragment function you’ll need to divide its xy by w when you sample the depth map. Or use tex2Dproj(), which will do the divide for you.
More on tex2Dproj here

float depth = tex2D(_CameraDepthTexture, i.screenPos.xy/i.screenPos.w).r; // Sample the depth texture via xy/w. Or use tex2Dproj(_CameraDepthTexture, i.screenPos.xyww).r;

Depending on your depth texture format and what values it contains, you might need to further process the obtained float depth with helpers like LinearEyeDepth or Linear01Depth… or not process it at all, if it already contains values beyond the [0,1] range. See the docs here.

Here is a toy example where I read the depth and squash it into visible [0 to 1] range:
toy example

sampler2D _CameraDepthTexture; // declared so the fragment shader below can sample it

struct v2f{
    float4 pos: SV_POSITION;
    float4 screenPos : TEXCOORD1;
};

v2f vert (appdata_base v) {    // appdata_base is provided by UnityCG.cginc
   v2f o;
   o.pos = UnityObjectToClipPos(v.vertex);
   o.screenPos = ComputeScreenPos(o.pos);
   return o;
}

fixed4 frag(v2f i) : SV_Target{
   const float NEARPLANE = _ProjectionParams.y;  //unity provides this constant. Camera's near plane.
   const float FARPLANE = _ProjectionParams.z; //Not needed, but I'll use it for an artistic effect of heightmap.
 
   float depth = LinearEyeDepth(tex2D(_CameraDepthTexture, i.screenPos.xy/i.screenPos.w).r); // Sample the depth texture via xy/w. Or use tex2Dproj(_CameraDepthTexture, i.screenPos.xyww).r;
   float heightmap = (depth - NEARPLANE)/(FARPLANE - NEARPLANE);
   heightmap = 1-heightmap;//for heightmap (closer=whiter)
   return fixed4(heightmap.rrr,1);
}

Another important thing:
If you intend to calculate the depth of the current fragment (without any depth map),
you need to divide its z coordinate by w:
float thisFragDepth = LinearEyeDepth(i.screenPos.z/i.screenPos.w);
Note that in this particular case, i.screenPos.z is the distance from the camera’s position, not from its near plane.

Remember that DirectX differs from OpenGL in how it handles the projection matrix, and in what will look “white vs dark” in a depth texture (nearer vs further, or the other way around).
So if your shader seems to ignore ZTest LEqual, or seems to have a weird triangle sort order (or maybe the screen is flipped upside down), chances are you need to check those platform differences: Unity - Manual: Writing shaders for different graphics APIs
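A typical guard looks like this (a sketch using Unity's standard platform macros, not code from this thread; note that LinearEyeDepth/Linear01Depth already account for reversed z, so manual flips are only for raw values):

#if UNITY_UV_STARTS_AT_TOP
    uv.y = 1.0 - uv.y;          // D3D-like APIs: V starts at the top, so flipped render textures may need this
#endif
#if defined(UNITY_REVERSED_Z)
    rawDepth = 1.0 - rawDepth;  // D3D-like APIs store reversed z (near = 1, far = 0)
#endif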

And if you are doing something with your camera projection matrices yourself (instead of relying on Unity’s shader macros/functions), then check GL.GetGPUProjectionMatrix as well.
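For example (a sketch; _MyCustomProjection is a hypothetical shader property name):

// Convert the projection matrix to the active graphics API's conventions
// (reversed z, flipped y, etc.) before uploading it to a shader yourself.
Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, true);  // true = rendering into a RenderTexture
Shader.SetGlobalMatrix("_MyCustomProjection", proj * cam.worldToCameraMatrix);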