The distance between the surface and the camera is just that: a distance, not the depth. The depth is the distance from the camera to the plane, parallel to the view plane, that the point on the surface sits on.
That's also different from the non-linear depth stored in the depth buffer and camera depth texture, which is different from the "linear 01" depth, which is different from the linear (aka linear eye) depth.
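To make the terms concrete, here's a hedged sketch of the three depth values in Unity's built-in pipeline, using the Linear01Depth and LinearEyeDepth helpers from UnityCG.cginc (the uv coordinate and the _CameraDepthTexture sampler setup are assumed to come from your own shader):

```hlsl
// Raw, non-linear value as stored in the depth buffer / camera depth texture.
float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);

// "Linear 01" depth: linear, remapped so 0 is at the camera and 1 at the far plane.
float linear01 = Linear01Depth(rawDepth);

// Linear eye depth: distance from the camera plane in world units.
float eyeDepth = LinearEyeDepth(rawDepth);
```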
The way you get the linear depth from a world position is by converting the world-space position into view space and taking the negative z, like this:
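Reconstructing the snippet the sentence refers to (UnityWorldToViewPos is the UnityCG.cginc helper mentioned later in the thread; worldPos is assumed to be your fragment's world-space position):

```hlsl
// View space looks down -Z, so the linear (eye) depth is the negated view-space z.
float3 viewPos = UnityWorldToViewPos(worldPos);
float linearDepth = -viewPos.z;
```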
As a quirk of how projection matrices work, clipPos.w is equal to the linearDepth above if you have a perspective camera.
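As a sketch of why that quirk holds (assuming Unity's OpenGL-style convention, where the last row of a perspective projection matrix is (0, 0, -1, 0), so clip-space w is just the negated view-space z):

```hlsl
// The projection's bottom row (0, 0, -1, 0) means clipPos.w = -viewPos.z.
float4 clipPos = UnityWorldToClipPos(worldPos);
float linearDepth = clipPos.w; // same as -UnityWorldToViewPos(worldPos).z for perspective cameras
```

Note this shortcut does not hold for orthographic cameras, where clipPos.w is a constant 1.0.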
But, depending on what exactly you're doing, comparing the linearDepth value from above against LinearEyeDepth(sceneDepth) is probably your best bet.
One side note: you used the SRP form of the LinearEyeDepth function, which also takes _ZBufferParams as an input. If you're writing this for one of the SRPs, you should use -TransformWorldToView(worldPosition).z instead of the function above. Both functions are equivalent to:
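Presumably the snippet here is the view-matrix multiply that both helpers wrap; a hedged reconstruction:

```hlsl
// Both UnityWorldToViewPos (built-in) and TransformWorldToView (SRPs) boil down to
// multiplying the world position by the view matrix.
float3 viewPos = mul(UNITY_MATRIX_V, float4(worldPosition, 1.0)).xyz;
float linearEyeDepth = -viewPos.z;
```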
Thanks for the detailed response. The way I "solved" this was a little hacky and barely good enough. I will try to implement what you've posted.
I cannot thank you enough for this answer. I've been trying all sorts of math to evaluate the depth. In my last attempt, I was projecting the (vertex world position - camera world position) vector onto the camera's forward vector and using the magnitude of the projected vector as the depth (obviously remapping from (near, far) to (1, 0)). It looked good but was not perfect at every single pixel; your answer made it all clear.
Sorry to necropost, but I can’t find anything about UnityWorldToViewPos in the docs. This seems to be exactly what I need to solve one of my own problems, though.
I've seen many of your replies and they were really helpful. I'm making a ray tracing algorithm for the Quest 2, and thanks to you I was able to complete it. Thank you, even now!