Hi all, I'm stuck trying to convert a depth value to a world position in HDRP. I have the following code working in URP, but no luck so far in HDRP:
EDIT:
I should add that the end goal is to output depth again, and that might well be where the problem is. HDRP Shader Graph and Amplify want depth in the form of an offset, so here is what I do:
It seems like there’s a bug with the built-in function, because the result is wrong unless I invert the world position z, and I honestly can’t figure out why.
However, looking at the code that the ComputeViewSpacePosition function calls, it’s almost what you’re already doing. There was one thing missing, and weirdly, even though this matches Unity’s ComputeViewSpacePosition() code as best I can tell, it doesn’t need the inverted z.
You can then use that absolute world position however you need. GetAbsolutePositionWS just doesn’t do anything in URP.
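For reference, a minimal sketch of the depth-to-absolute-world-position reconstruction being discussed, using the SRP Core / HDRP helpers (`uv` and the sampling function are assumed inputs from your pass; this is an illustration, not the exact code from the thread):

```
// uv: screen-space UV of the fragment; deviceDepth: raw depth-buffer value.
float deviceDepth = SampleCameraDepth(uv); // HDRP helper for the camera depth buffer
// Reconstruct the world-space position from NDC + raw depth:
float3 positionWS = ComputeWorldSpacePosition(uv, deviceDepth, UNITY_MATRIX_I_VP);
// HDRP renders camera-relative, so convert to absolute world space.
// (In URP this call is an identity, which is why it "does nothing" there.)
float3 absolutePositionWS = GetAbsolutePositionWS(positionWS);
```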
As for the depth offset, you need the depth, not the length.
float worldPosLinearDepth = LinearEyeDepth(depth, _ZBufferParams);
// the w of clip space is the linear view depth
offsetDepth = worldPosLinearDepth - TransformObjectToHClip(vertexPos).w;
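As a sanity check on the "depth, not length" point: LinearEyeDepth with _ZBufferParams converts a raw perspective depth-buffer value back to linear view-space depth (distance along the camera forward axis, not the Euclidean distance to the point). In SRP Core's Common.hlsl it is essentially:

```
// From SRP Core (Common.hlsl), perspective case:
float LinearEyeDepth(float depth, float4 zBufferParam)
{
    return 1.0 / (zBufferParam.z * depth + zBufferParam.w);
}
```

That matches the clip-space w used above, which for a standard perspective projection is the same linear view depth.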
Just wanted to say thank you once more here, I’m going through your notes trying to make sense of it all.
My case is quite involved due to instanced mesh drawing on a custom pass, it seems that HDRP adds a bit of changes there too, like the center of the bounding box offsetting everything.
Hello again, asking for help once more because one remaining detail is driving me nuts:
The offsetDepth in HDRP is almost working with the above suggestion, but only if I move my camera far enough from the object; if I zoom in, the depth becomes distorted, almost like a fisheye effect.
I realize there’s lots of things involved on my side, just checking back in case I missed something obvious.
Hello (and thanks for having a look at this!). I tried to isolate the code as best I could. Again, this all works fine in URP with the same code, so aside from the final hdrpDepthOffset, everything else should be correct, hopefully.
// first pass renders an object into a depth texture
float4x4 modelMatrix = mul(UNITY_MATRIX_M, myObjectMatrix);
float4 hitDepthPointWS = mul(modelMatrix, float4(myHitPoint, 1));
float4 hitDepthPointScreen = mul(UNITY_MATRIX_VP, hitDepthPointWS);
float outDepth = saturate(hitDepthPointScreen.z / hitDepthPointScreen.w);
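One thing worth noting about this first pass: HDRP uses camera-relative rendering, so UNITY_MATRIX_VP expects a camera-relative world position, not an absolute one. A sketch of the adjustment, assuming hitDepthPointWS above is absolute (this is a guess at a possible mismatch, not a confirmed fix for your setup):

```
// HDRP's view-projection matrix is built for camera-relative positions.
// Convert the absolute world-space hit point before projecting it:
float3 hitDepthPointRWS = GetCameraRelativePositionWS(hitDepthPointWS.xyz);
float4 hitDepthPointScreen = mul(UNITY_MATRIX_VP, float4(hitDepthPointRWS, 1));
```

In URP the same code works unchanged because its world space is absolute, which would explain a URP/HDRP discrepancy in exactly this spot.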
// a subsequent pass will read the depth texture and do the final shade on screen and output hdrpDepthOffset.
// this is like rendering impostors, the mesh here is a box
float4 screenTexCoord = float4(screenPos.xy / screenPos.w, 0, 0);
float depth = tex2Dlod(depthTex, screenTexCoord).x;
float worldPosLinearDepth = LinearEyeDepth(depth, _ZBufferParams);
float hdrpDepthOffset = worldPosLinearDepth - TransformObjectToHClip(vertexPos).w; // TransformObjectToHClip takes a float3 object-space position
I’m attaching a screenshot to show the weird depth I get; the cube colored in red is the one failing to output a proper depth value. The result is wrong when I zoom in with the camera but looks fine when I zoom out.
Hi. This is what I use in URP/HDRP to create a height fog effect based on depth in Amplify. Not sure if it helps in your case, or what your use case is, but it works fine for me. It’s based on a tutorial from this website, though I can’t find the exact page: https://cyangamedev.wordpress.com/
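In code terms, that kind of depth-based height fog boils down to reconstructing the world position behind each pixel and blending fog by height. A rough HLSL sketch, assuming a full-screen or transparent pass with screen UV available; _FogColor, _FogHeight, _FogFalloff, and _FogDensity are illustrative parameter names, not from the tutorial:

```
// deviceDepth: raw depth behind this pixel; positionWS: reconstructed world position.
float deviceDepth = SampleCameraDepth(uv);
float3 positionWS = GetAbsolutePositionWS(
    ComputeWorldSpacePosition(uv, deviceDepth, UNITY_MATRIX_I_VP));
// Fog fades in below _FogHeight over _FogFalloff world units:
float heightFactor = saturate((_FogHeight - positionWS.y) / _FogFalloff);
float3 foggedColor = lerp(sceneColor, _FogColor.rgb, heightFactor * _FogDensity);
```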