depthBuffer to worldPos in HDRP

Hi all, I'm stuck trying to convert a depth value to a world position in HDRP. I have the following code working in URP, but no luck so far in HDRP:

float depth = tex2Dlod(myTexture, bufferTexCoord).w;
float4 posCS = float4(bufferTexCoord.xy * 2.0 - 1.0, depth, 1.0);
// float4 posInvProj = mul(UNITY_MATRIX_I_VP, posCS);// URP
float4 posInvProj = mul(_InvViewProjMatrix, posCS);
float3 worldPos = posInvProj.xyz / posInvProj.w;

Any help on this would be much appreciated!

EDIT:
I should add that the final goal is to output depth again, and that might well be where the problem is. HDRP Shader Graph and Amplify want depth in the form of an offset, so here is what I do:

outDepth = - length(mul(UNITY_MATRIX_M, float4(vertexPos,1)).xyz - worldPos);

The result is slightly wrong though, likely due to camera-relative rendering.

URP and HDRP both have built-in functions to convert the depth buffer to a world position … though … well … this is the code that works:

float3 worldPos = ComputeViewSpacePosition(screenUV, rawDepth, UNITY_MATRIX_I_VP);
worldPos.z = -worldPos.z; // wat?
worldPos = GetAbsolutePositionWS(worldPos);

It seems like there’s a bug with the built-in function, because the result is wrong without inverting the z. And I honestly can’t figure out why.
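For what it’s worth, the sign flip is at least consistent with Unity’s view-space convention: view space is right-handed with the camera looking down -z, so visible points have negative view-space z, and the positive eye depth is the negated z. A minimal pure-Python sketch of that convention, assuming a hypothetical camera at the world origin facing world +z (so the view transform reduces to negating z):

```python
# Unity's view space is right-handed: the camera looks down -z, so a point
# in front of the camera has a negative view-space z, and eye depth is -z.
# Hypothetical setup: camera at the world origin facing world +z, so the
# view matrix reduces to flipping the z axis.

def apply_view(p):
    x, y, z = p
    return (x, y, -z)  # stand-in view transform: negate z

world_point = (0.0, 0.0, 5.0)        # 5 units in front of the camera
view_point = apply_view(world_point)

print(view_point)                    # (0.0, 0.0, -5.0): negative view-space z
eye_depth = -view_point[2]
print(eye_depth)                     # 5.0: flipping the sign gives positive depth
```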

However, looking at the code that the ComputeViewSpacePosition() function calls, it’s almost what you’re already doing. There was one thing missing, and weirdly, even though this matches Unity’s ComputeViewSpacePosition() code as best I can tell, it doesn’t need the inverted z:

float depth = tex2Dlod(myTexture, bufferTexCoord).w;
float4 posCS = float4(bufferTexCoord.xy * 2.0 - 1.0, depth, 1.0);
#if UNITY_UV_STARTS_AT_TOP
    posCS.y = -posCS.y;
#endif
float4 posInvProj = mul(UNITY_MATRIX_I_VP, posCS);
float3 worldPos = posInvProj.xyz / posInvProj.w;
float3 absWorldPos = GetAbsolutePositionWS(worldPos);

You can then use that absolute world position however you need. GetAbsolutePositionWS() just doesn’t do anything in URP.
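If it helps to sanity-check the reconstruction, here’s a small pure-Python round trip of the same math: project a known point with a hypothetical D3D-style perspective matrix (NDC z in [0..1]), then undo it with the inverse matrix plus the perspective divide, exactly as the shader does. The matrix values are made-up stand-ins, not Unity’s actual matrices; the view matrix is taken as identity (so world equals view space), and reversed-Z and camera-relative offsets are ignored for brevity.

```python
import math

# Hypothetical D3D-style perspective matrix (NDC z in [0..1], view -z forward).
def perspective(fov_deg, aspect, zn, zf):
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    a = zf / (zn - zf)            # maps view z into NDC z in [0..1]
    b = zn * zf / (zn - zf)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, a, b],
            [0, 0, -1, 0]]

def matvec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def inverse(m):
    # plain Gauss-Jordan inverse of a 4x4 matrix
    a = [row[:] + [1.0 if r == c else 0.0 for c in range(4)]
         for r, row in enumerate(m)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

proj = perspective(90.0, 1.0, 0.1, 100.0)
world = [1.0, 2.0, -5.0, 1.0]             # known point, 5 units in front

clip = matvec(proj, world)
ndc = [c / clip[3] for c in clip]         # ndc[2] is what the depth buffer stores

# reconstruction, mirroring the shader: (ndc.xy, rawDepth, 1) * inverse VP, then /w
pos_cs = [ndc[0], ndc[1], ndc[2], 1.0]
h = matvec(inverse(proj), pos_cs)
reconstructed = [h[i] / h[3] for i in range(3)]

print(reconstructed)                      # ~[1.0, 2.0, -5.0]
```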

As for the depth offset, you need the view depth, not the length (distance).

float worldPosLinearDepth = LinearEyeDepth(depth, _ZBufferParams);
// the w of clip space is the linear view depth
offsetDepth = worldPosLinearDepth - TransformObjectToHClip(vertexPos).w;
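As a sanity check on the “w of clip space is the linear view depth” comment, here’s a pure-Python sketch: it computes the raw depth a hypothetical reversed-Z perspective projection would write for a point at a known eye depth, then applies Unity’s LinearEyeDepth formula, `1 / (d * _ZBufferParams.z + _ZBufferParams.w)`, and recovers exactly that eye depth, i.e. the clip-space w. The near/far values are made up.

```python
# Sketch: LinearEyeDepth(rawDepth, _ZBufferParams) recovers the linear eye
# depth, which for a perspective projection equals clip-space w.
# Assumes a reversed-Z depth buffer (as in HDRP); near/far are made-up values.

near, far = 0.1, 100.0
eye_depth = 5.0                      # linear view depth of a test point

# reversed-Z perspective terms (NDC z = 1 at near, 0 at far); view z = -eye_depth
a = near / (far - near)
b = near * far / (far - near)
clip_z = -a * eye_depth + b
clip_w = eye_depth                   # perspective projection: w = linear view depth
raw_depth = clip_z / clip_w          # what the depth buffer stores

# Unity's _ZBufferParams for a reversed-Z buffer:
# x = -1 + far/near, y = 1, z = x/far, w = y/far
zbp_z = (-1.0 + far / near) / far
zbp_w = 1.0 / far

# LinearEyeDepth(d, params) = 1 / (d * params.z + params.w)
linear_eye = 1.0 / (raw_depth * zbp_z + zbp_w)
print(linear_eye)                    # ~5.0, i.e. exactly clip_w
```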

Just wanted to say thank you once more here; I’m going through your notes trying to make sense of it all.
My case is quite involved due to instanced mesh drawing in a custom pass, and it seems HDRP introduces a few changes there too, like the center of the bounding box offsetting everything.

Hello again, asking for help once more because one remaining detail is driving me nuts:
The offsetDepth in HDRP is almost working with the above suggestion, but only if I move the camera far enough from the object; if I zoom in, the depth becomes distorted, almost like a fisheye effect.
I realize there are lots of things involved on my side, just checking back in case I missed something obvious.

Can you show more of the code? A “fisheye” effect is a good sign that somewhere you’re using the view distance instead of the view depth.
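To illustrate the distinction: view depth is just the (negated) z component of the view-space position and is constant across a plane facing the camera, while view distance is the length of the whole view-space vector and grows toward the screen edges, which is exactly what produces a fisheye-looking warp. A tiny sketch with made-up view-space points:

```python
import math

# Two view-space points at the same view DEPTH (5 units along -z): one on the
# camera axis, one off to the side (toward the screen edge).
center = (0.0, 0.0, -5.0)
edge = (4.0, 0.0, -5.0)

def view_depth(p):
    return -p[2]                 # depth: just the negated view-space z

def view_distance(p):
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)  # distance: vector length

print(view_depth(center), view_depth(edge))        # 5.0 5.0 -> flat, as expected
print(view_distance(center), view_distance(edge))  # 5.0 ~6.4 -> bulges at edges
```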

Hello (and thanks for having a look at this!), I tried to isolate the code as best I could. Again, this is all working fine in URP with the same code, so aside from the final hdrpDepthOffset, everything else should hopefully be correct.

// first pass renders an object into a depth texture
float4x4 modelMatrix = mul(UNITY_MATRIX_M, myObjectMatrix);
float4 hitDepthPointWS = mul(modelMatrix, float4(myHitPoint, 1));
float4 hitDepthPointScreen = mul(UNITY_MATRIX_VP, hitDepthPointWS);
float outDepth = saturate(hitDepthPointScreen.z / hitDepthPointScreen.w);

// a subsequent pass will read the depth texture and do the final shade on screen and output hdrpDepthOffset.
// this is like rendering impostors, the mesh here is a box 
float4 screenTexCoord = float4(screenPos.xy / screenPos.w, 0, 0);
float depth = tex2Dlod(depthTex, screenTexCoord).x;
float worldPosLinearDepth = LinearEyeDepth(depth, _ZBufferParams);
float hdrpDepthOffset = worldPosLinearDepth - TransformObjectToHClip(vertexPos).w; // TransformObjectToHClip() expects a float3

I’m attaching a screenshot to show the weird depth I get, the one colored in red is my cube failing to output a proper depth value. The result is wrong when I zoom in with the camera, but looks fine when I zoom out.

Hi. This is what I use in URP/HDRP to create a height-fog effect based on depth in Amplify. Not sure if it helps in your case, or what your use case is, but it works fine for me. It’s based on a tutorial from this website, though I can’t find the exact post: https://cyangamedev.wordpress.com/

You can copy-paste this code in ASE: http://paste.amplify.pt/view/raw/abc1570c
I hope it helps :)

thanks! I’ll give it a shot in code.
