With the package HDRP 3.X (from 3.0.0 to 3.3.0), it used to work fine. Here is the normal rendering and the depth capture.
But with the HDRP package 4.X (from 4.0.0 to the latest 4.3.0), it’s completely broken. It looks like the _CameraDepthTexture stores mipmaps of the depth.
What should I do to properly sample the depth map texture with HDRP 4.X, while still working with HDRP 3.X and the built-in render pipeline?
We have changed the depth texture to encode a full depth pyramid (so all the mips are in mip 0, side by side). To correctly sample the depth buffer, you should use LOAD_TEXTURE2D (with absolute screen coordinates) instead of SAMPLE.
In ShaderVariables.hlsl there are two helper functions:
// Note: To sample camera depth in HDRP we provide these utility functions because the way we store the depth mips can change
// Currently it's an atlas and its layout can be found at ComputePackedMipChainInfo in HDUtils.cs
float SampleCameraDepth(uint2 pixelCoords)
{
    return LOAD_TEXTURE2D_LOD(_CameraDepthTexture, pixelCoords, 0).r;
}
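For a fragment shader that only has a normalized screen UV, the conversion to the absolute pixel coordinates that LOAD_TEXTURE2D expects might look like this (a minimal sketch; it assumes HDRP's _ScreenSize uniform, whose xy components hold the render target width/height):

```hlsl
// Sketch: convert a 0..1 screen UV to absolute pixel coordinates,
// then load mip 0 of the depth texture directly (no filtering).
// _ScreenSize.xy is assumed to hold the render target width/height.
float SampleCameraDepthFromUV(float2 uv)
{
    uint2 pixelCoords = uint2(uv * _ScreenSize.xy);
    return LOAD_TEXTURE2D_LOD(_CameraDepthTexture, pixelCoords, 0).r;
}
```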
Thanks for the reply. I played with it a bit and it seems to work.
But once I include the mandatory files to use the HDRP API (“Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl”), it’s really complicated to adapt a legacy shader which used to call a lot of “UnityCG.cginc” functions…
I am making Unity Assets and aim to make shaders compatible with both legacy renderer and SRP. Until HDRP 3.3.0, it was fine because for my case, I didn’t have to include SRP or HDRP specific files. But with this new way of sampling the depth texture, I have to include “ShaderLibrary/Common.hlsl”, but it doesn’t look like it’s really possible to use both the SRP API and the good old “UnityCG.cginc”.
Are there any good practices for this, or are we supposed to maintain two completely separate shaders: one including "UnityCG.cginc" for the legacy pipeline, and one including the SRP API for HDRP/LWRP support?
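One pattern worth considering (sketched here, not confirmed by the thread) is a single .shader file with two SubShaders: HDRP picks the one carrying its RenderPipeline tag, while the built-in pipeline falls through to the untagged one. The shader name and pass contents below are placeholders:

```
Shader "Hypothetical/DualPipeline"
{
    SubShader
    {
        // Selected when the active render pipeline is HDRP
        Tags { "RenderPipeline" = "HDRenderPipeline" }
        Pass
        {
            HLSLPROGRAM
            // #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
            // ... HDRP variant, sampling depth via LOAD_TEXTURE2D_LOD ...
            ENDHLSL
        }
    }
    SubShader
    {
        // No RenderPipeline tag: used by the built-in (legacy) pipeline
        Pass
        {
            CGPROGRAM
            // #include "UnityCG.cginc"
            // ... legacy variant, sampling depth via SAMPLE_DEPTH_TEXTURE ...
            ENDCG
        }
    }
}
```

This keeps one asset per shader at the cost of duplicating each pass body between the two SubShaders.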
I’m using a very simple customRender derivation from the HDRP on second camera.
==> I have some fragments (As) overwriting other fragments (Bs), although they (the As) are farther away and hidden behind the Bs.
I tried different ZTest settings; nothing changes. In any case, not all fragments are overwritten, so it's not happening in every case; there's a logic to it (the bigger the distance between the As and the Bs, the more it happens).
I looked at the _CameraDepthTexture, and the depth of the As is also overwriting the depth of the Bs.
This is not happening on the main HDRP camera.
Anyway, in relation to this topic, I decided to compare the depth of the fragments to the depth texture. I did this previously in the legacy pipeline (for water effects) and it worked well (just to say it's not the first time I'm doing this kind of thing).
First, the _CameraDepthTexture is not an atlas of mipmaps when you use a custom render derivation, unlike with the HDRP main camera.
Second, the encoding of the depth there is very strange. I tried to reconstruct the fragment depth by lots of different means, using
LinearEye
Linear01
“nothing”
on
-TransformWorldToView(o.vertex.xyz).z * _ProjectionParams.w;
LOAD_TEXTURE2D_LOD(_CameraDepthTexture, i.screenPos.xy, 0).r; (with o.screenPos = ComputeScreenPos(o.vertex))
SampleCameraDepth(i.screenPos.xy); (which is not working because it transforms my coordinates)
etc…
I can see that there are depth values in the depth texture "at the right place" for all of my tests, but it's impossible to compare them with the fragment depth like I did before. The value ranges are completely different, even when I manage to get a 0…1 range by using Unity shader functions.
Does anyone know how the depth is encoded in the _CameraDepthTexture?
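For reference, the comparison being attempted above could be sketched like this in an HDRP fragment shader. This is an assumption-laden sketch: i.screenPos comes from ComputeScreenPos in the vertex shader, i.positionWS is a hypothetical world-space position interpolator, and _ScreenSize / _ZBufferParams are HDRP-provided uniforms:

```hlsl
// Both depths converted to linear eye space before comparing.
float2 uv       = i.screenPos.xy / i.screenPos.w;            // 0..1 screen UV
uint2  pixel    = uint2(uv * _ScreenSize.xy);                // absolute pixel coords
float  rawDepth = LOAD_TEXTURE2D_LOD(_CameraDepthTexture, pixel, 0).r;
float  sceneEye = LinearEyeDepth(rawDepth, _ZBufferParams);  // scene depth, eye space
float  fragEye  = -TransformWorldToView(i.positionWS).z;     // fragment depth, eye space
float  diff     = sceneEye - fragEye;                        // > 0 when the scene is behind the fragment
```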
EDIT: found the cause of my depth problem: I forgot that I set the depth buffer to 0 on the render texture. I'll see if it changes the fact that the depth texture values are strange. And anyway, I need to compare depths for other effects.
@elettrozero Shader Graph can certainly use the right depth value, but I just want to write a custom shader that uses the right depth value to implement a decal effect. Still, thanks buddy, I'll keep Googling.
Try this method: SHADERGRAPH_SAMPLE_SCENE_DEPTH, passing screen position .xy / .w.
I assume you’re on HDRP, therefore you cannot access the _CameraDepthTexture directly.
@elettrozero I just looked at this function and used it in my shader code; it doesn't work, but you did enlighten me ;). I searched for LOAD_TEXTURE2D and LOAD_TEXTURE2D_LOD and found the right way to get the depth. There are 4 ways to do this: basically, use LOAD_TEXTURE2D or LOAD_TEXTURE2D_LOD with _CameraDepthTexture or _DepthPyramidTexture.
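Spelled out, those four combinations would be (pixelCoords being absolute pixel coordinates; whether both textures are bound may depend on the HDRP version):

```hlsl
float d1 = LOAD_TEXTURE2D(_CameraDepthTexture, pixelCoords).r;
float d2 = LOAD_TEXTURE2D_LOD(_CameraDepthTexture, pixelCoords, 0).r;
float d3 = LOAD_TEXTURE2D(_DepthPyramidTexture, pixelCoords).r;
float d4 = LOAD_TEXTURE2D_LOD(_DepthPyramidTexture, pixelCoords, 0).r;
```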
I found that _CameraDepthTexture and _DepthPyramidTexture seem to look the same, and from the buffer name I guess that they are in fact the same texture.
They said they put the whole pyramid in the depth buffer texture and that you should load the LOD you want, but can you access either of the two variables in HDRP?