Camera Depth Texture sampling with 2018.3 and HDRP 4.X (mip map issue)

Hi,
I am using Unity 2018.3.0b12 with HDRP, and I want to sample the camera depth texture.
Here's how I do it; it's pretty standard:

UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

// ...

float4 projPos = ComputeScreenPos(vertex_out);
projPos.z = -UnityObjectToViewPos(vertex_in).z;

// ...

return LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(projPos))) * _ProjectionParams.w;

With the package HDRP 3.X (from 3.0.0 to 3.3.0), it used to work fine. Here is the normal rendering and the depth capture.
[Attached screenshots: normal rendering and the depth capture]

But with the HDRP package 4.X (from 4.0.0 to the latest 4.3.0), it’s completely broken. It looks like the _CameraDepthTexture stores mipmaps of the depth.
[Attached screenshot: broken depth rendering with HDRP 4.X]

What should I do to properly sample the depth map texture with HDRP 4.X, while still working with HDRP 3.X and the built-in render pipeline?

Thanks


Hi,

We have changed the depth texture to encode a full depth pyramid (all mips are stored side by side in mip 0). To correctly sample the depth buffer, you should use LOAD_TEXTURE2D (with absolute screen coordinates) instead of SAMPLE.

In ShaderVariables.hlsl there are two helper functions:

// Note: To sample camera depth in HDRP we provide these utility functions, because the way we store the depth mips can change.
// Currently it's an atlas, and its layout can be found in ComputePackedMipChainInfo in HDUtils.cs
float SampleCameraDepth(uint2 pixelCoords)
{
    return LOAD_TEXTURE2D_LOD(_CameraDepthTexture, pixelCoords, 0).r;
}

float SampleCameraDepth(float2 uv)
{
    return SampleCameraDepth(uint2(uv * _ScreenSize.xy));
}
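For reference, the projective sample from the original post could be rewritten against these helpers roughly like this (a sketch only; projPos is the ComputeScreenPos result from the first post, and LinearEyeDepth here is the built-in UnityCG variant, not HDRP's overload that takes the z-buffer parameters explicitly):

```hlsl
// Hedged sketch: replacing SAMPLE_DEPTH_TEXTURE_PROJ with the HDRP helper.
float2 screenUV = projPos.xy / projPos.w;        // manual perspective divide
float rawDepth  = SampleCameraDepth(screenUV);   // LOADs mip 0 of the atlas
return LinearEyeDepth(rawDepth) * _ProjectionParams.w;
```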

Thanks for the reply. I played with it a bit and it seems to work.

But once I include the mandatory files to use the HDRP API (“Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl”), it’s really complicated to adapt a legacy shader which used to call a lot of “UnityCG.cginc” functions…

I am making Unity Assets and aim to make shaders compatible with both the legacy renderer and SRP. Until HDRP 3.3.0 this was fine because, in my case, I didn't have to include SRP- or HDRP-specific files. But with this new way of sampling the depth texture, I have to include "ShaderLibrary/Common.hlsl", and it doesn't look like it's really possible to use both the SRP API and the good old "UnityCG.cginc".

Are there any good practices for this, or are we supposed to have two completely separate shaders: one including "UnityCG.cginc" for the legacy pipeline, and one including the SRP API for HDRP/LWRP support?
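One commonly suggested pattern (a sketch, not an official recommendation) is to keep both variants in one shader file and let Unity pick the right SubShader via the "RenderPipeline" tag, which SRP-based pipelines check; the built-in pipeline falls through to the untagged SubShader:

```hlsl
// Hypothetical skeleton; pass bodies and pragmas omitted for brevity.
Shader "Example/DualPipeline"
{
    SubShader
    {
        Tags { "RenderPipeline" = "HDRenderPipeline" }
        Pass
        {
            // HLSLPROGRAM ... include SRP Common.hlsl here ... ENDHLSL
        }
    }
    SubShader
    {
        // No RenderPipeline tag: used by the built-in (legacy) pipeline.
        Pass
        {
            // CGPROGRAM ... include UnityCG.cginc here ... ENDCG
        }
    }
}
```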

Thanks

On a mostly related note, the Scene Depth shader graph node also wrongly samples the whole mipmap chain.


I'm using a very simple customRender derivation from HDRP on a second camera.
==> Some fragments (As) overwrite other fragments (Bs) although they (As) are farther away and hidden behind (Bs).
I tried different ZTest settings and nothing changes; it doesn't happen for all fragments, so it's not something that happens in all cases, but there is a logic to it (the bigger the distance between As and Bs, the more it happens).
I looked at the _CameraDepthTexture, and the depth of (As) also overwrites the depth of (Bs).
This does not happen on the main HDRP camera.

Anyway, in relation to this topic, I decided to compare the fragment depth to the depth texture. I did this previously with the legacy pipeline (for water effects) and it worked well (just to say it's not the first time I'm doing this kind of thing).
First, the _CameraDepthTexture is not a mipmap atlas when you use a custom derivation, unlike with the main HDRP camera.
Second, the encoding of the depth there is very strange. I tried to reconstruct the fragment depth in lots of different ways, using
LinearEyeDepth
Linear01Depth
"nothing"
on
-TransformWorldToView(o.vertex.xyz).z * _ProjectionParams.w;
LOAD_TEXTURE2D_LOD(_CameraDepthTexture, i.screenPos.xy, 0).r; (with o.screenPos = ComputeScreenPos(o.vertex))
SampleCameraDepth(i.screenPos.xy); (which doesn't work, because it transforms my coordinates)
etc.

I can see that there are depth values in the depth texture "at the right place" for all of my tests, but it is impossible to compare them with the fragment depth like I did before. The value ranges are completely different, even when I manage to get a 0..1 range by using Unity shader functions.
Does anyone know how the depth is encoded in the _CameraDepthTexture?

EDIT: found the cause of my depth problem: I forgot that I set the depth buffer to 0 in the render texture. I'll see if that changes the strange depth texture values. In any case, I still need to compare depths for other effects.
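For what it's worth, the depth texture stores raw, non-linear device depth (reversed-Z on most modern platforms, so roughly 1 at the near plane and 0 at the far plane), which is why raw values never match view-space depth directly. A minimal sketch of the usual linearization, mirroring what UnityCG's LinearEyeDepth does with the built-in _ZBufferParams:

```hlsl
// Raw device depth -> view-space (eye) depth in world units.
float EyeDepthFromRaw(float rawDepth)
{
    return 1.0 / (_ZBufferParams.z * rawDepth + _ZBufferParams.w);
}
```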

@SebLagarde any feedback on this one, please?


Up

According to the repository it has been fixed and the new release package will be out very soon:
https://github.com/Unity-Technologies/ScriptableRenderPipeline/blob/release/2018.3/com.unity.render-pipelines.high-definition/CHANGELOG.md


Fixed in 4.8.0.


Can confirm that it now works as expected. Unfortunately we still need a way to linearise the depth values, but this is actually being addressed right now (https://github.com/Unity-Technologies/ScriptableRenderPipeline/pull/2740), so hopefully we'll get it in the next release package. :)


I got the error "undeclared identifier 'SampleCameraDepth'" when I call this method. I changed
CGPROGRAM/ENDCG to
HLSLPROGRAM/ENDHLSL and added

HLSLINCLUDE
    #include "UnityCG.cginc"
    #include "HLSLSupport.cginc"

    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/API/D3D11.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Macros.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/UnityInstancing.hlsl"
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary/ShaderVariables.hlsl"   
    ENDHLSL

And in frag function

float depth = SampleCameraDepth(i.uv);

here is the error:

I cannot figure it out. Any advice would be a big help; thanks in advance.

After doing what did you receive the error?
If it is after upgrading the package to 4.8, you must remove and reinstall the package.

@elettrozero Thanks buddy. My HDRP version is 4.8.0; I removed it and added it back again, but I still have the same error.
Here is my shader code:

Shader "MJ/ForwardDecal"
{
    Properties
    {
        _MainTex ("Decal Texture", 2D) = "white" {}
    }

    HLSLINCLUDE
    #include "UnityCG.cginc"
    #include "HLSLSupport.cginc"

    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/API/D3D11.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Macros.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/UnityInstancing.hlsl"
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary/ShaderVariables.hlsl"
    ENDHLSL

    CGINCLUDE
    #include "UnityCG.cginc"
    #include "HLSLSupport.cginc"
    ENDCG

    SubShader
    {
        Tags{ "Queue"="Geometry+1" }

        Pass
        {
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha

            HLSLPROGRAM
            #pragma target 3.0
            #pragma vertex vert
            #pragma fragment frag

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 screenUV : TEXCOORD0;
                float3 ray : TEXCOORD1;
            };
           
            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos (v.vertex);
                o.screenUV = ComputeScreenPos (o.pos);
                o.ray = UnityObjectToViewPos(v.vertex).xyz * float3(-1,-1,1);
                return o;
            }

            sampler2D _MainTex;
            sampler2D _CameraDepthTexture;
            float4 frag(v2f i) : SV_Target
            {
                i.ray = i.ray * (_ProjectionParams.z / i.ray.z);
                float2 uv = i.screenUV.xy / i.screenUV.w;
               
                float depth = LOAD_TEXTURE2D(_CameraDepthTexture, uv).x;
               
                depth = Linear01Depth (depth);
               
                float4 vpos = float4(i.ray * depth,1);
                float3 wpos = mul (unity_CameraToWorld, vpos).xyz;
                float3 opos = mul (unity_WorldToObject, float4(wpos,1)).xyz;
                clip (float3(0.5,0.5,0.5) - abs(opos.xyz));
               
                float2 texUV = opos.xz + 0.5;

                float4 col = tex2D (_MainTex, texUV);
                return col;
            }
            ENDHLSL
        }
    }

    Fallback Off
}

What is the right way to get depth?

May I suggest you use Shader Graph and have a look here?

Feedback Wanted: Shader Graph page-37

@elettrozero Shader Graph can certainly use the right depth value, but I just want to write a custom shader that uses the right depth value to implement a decal effect. Still, thanks buddy; I'll keep Googling.

Try this method: SHADERGRAPH_SAMPLE_SCENE_DEPTH, passing screen position .xy / .w.
I assume you're on HDRP; therefore you cannot access the _CameraDepthTexture directly.

@elettrozero I just looked at this function and used it in my shader code. It doesn't work, but you did enlighten me ;). I searched
LOAD_TEXTURE2D and
LOAD_TEXTURE2D_LOD, and found the right way to get depth; there are 4 ways to do it:

  1. float depth = LOAD_TEXTURE2D_LOD(_CameraDepthTexture, screenPos, 0).r;
  2. float depth = LOAD_TEXTURE2D(_CameraDepthTexture, screenPos).r;
  3. float depth = LOAD_TEXTURE2D_LOD(_DepthPyramidTexture, TexCoordStereoOffset(screenPos), 0).r;
  4. float depth = LOAD_TEXTURE2D(_DepthPyramidTexture, screenPos).r;

Basically, use LOAD_TEXTURE2D or LOAD_TEXTURE2D_LOD with _CameraDepthTexture or _DepthPyramidTexture.
I found that _CameraDepthTexture and _DepthPyramidTexture look the same, and from the buffer name I guess they are in fact the same texture.


They said they put the whole pyramid in the depth buffer texture and that you should load the LOD you want, but can you access either of the two variables in HDRP?

Yes. Before accessing these variables, we should declare the texture with TEXTURE2D, like:
TEXTURE2D(_CameraDepthTexture);
TEXTURE2D(_DepthPyramidTexture);

not like in CG, where it is
sampler2D _CameraDepthTexture;
sampler2D _DepthPyramidTexture;
This is the difference that got me stuck.
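Putting the pieces together, a minimal fragment-side sketch (assuming the SRP Core macros are included; SampleRawDepth is a made-up helper name, not part of any Unity API):

```hlsl
TEXTURE2D(_CameraDepthTexture);  // macro declaration, not sampler2D

float SampleRawDepth(float4 screenPos)  // screenPos from ComputeScreenPos
{
    float2 uv = screenPos.xy / screenPos.w;          // 0..1 screen UV
    uint2 pixelCoords = uint2(uv * _ScreenSize.xy);  // absolute pixel coords
    return LOAD_TEXTURE2D(_CameraDepthTexture, pixelCoords).r;
}
```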


Won't this cause a problem at high resolutions? Hypothetically, if I were rendering at 6K or 8K, wouldn't the depth texture exceed the 8K texture size limit?