I am having trouble computing the world position from the Depth Buffer. In other words, doing a “Depth Inverse Projection” (as in this well-known example) in Shader Graph.

I have tried doing it in two different ways; one is multiplying ‘Eye Depth’ by a normalized World Space view direction.

Can someone spot what’s wrong with my Shader Graphs?

I would just use a Custom Function node and copy keijiro’s code, but I don’t know how to add the required code (computing the view ray) to the Vertex Function.

Without diving into the issue I’m not sure why it is not working; I had similar issues when I was working on the same thing for the Boat Attack demo.

Here is my main graph, the custom node is just doing:

#if (UNITY_REVERSED_Z == 1)
Out = 1;
#else
Out = 0;
#endif

@noio you were really close in your first attempt. The main thing is you shouldn’t be normalizing the view direction, you should be flattening it in the camera forward direction.

The depth texture stores the depth from the camera, i.e. the distance from the camera along its forward axis, not along the view ray. So we need a view direction vector whose z in view space is a constant 1. We can do that with a dot product of the world-space view dir and the camera’s forward dir to get the depth of the vector in view space, and then divide by that depth, giving us a world-space view vector that has a view-space z of 1. Multiply that by the depth texture’s value, add the camera position, and we’re done.
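That math can be sketched outside of Shader Graph to convince yourself it works. This is a minimal illustration in plain Python with hypothetical names (not Shader Graph node names):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reconstruct_world_pos(cam_pos, cam_fwd, world_view_dir, eye_depth):
    """cam_fwd must be normalized; world_view_dir points from the camera
    toward the pixel. It does NOT need to be normalized, because the
    divide below cancels out whatever length it has."""
    view_depth = dot(world_view_dir, cam_fwd)       # depth of the ray in view space
    ray = [c / view_depth for c in world_view_dir]  # view-space z is now exactly 1
    return [p + r * eye_depth for p, r in zip(cam_pos, ray)]
```

Because the divide rescales the ray to unit view-space depth, multiplying by the eye depth lands exactly on the surface; multiplying a normalized direction by depth instead would overshoot more and more toward the screen edges.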

(This is the shader on a giant quad sitting in the world in front of the camera. Not attached to the camera or anything fancy, just hand placed to be between the camera and the “scene”.)

@Andre_Mcgrail 's approach is far more complicated because he’s having to reconstruct a projection-space position. The depth texture isn’t in projection space, it’s in window space, so you have to jump through a couple of hoops to convert from window space to the projection space that the Inverse View Projection matrix is expecting, plus all the stuff to deal with the fact that OpenGL and everything else handle projection space differently. The advantage of that technique is it works better for post-processing, where you can’t rely on the View Direction node … but this is for Shader Graph and you can’t use Shader Graph shaders for post-processing (or shouldn’t).
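To illustrate one of those hoops: the value in the depth buffer is a non-linear window-space z, and getting linear eye depth back means inverting the projection’s divide, which is conceptually what Unity’s LinearEyeDepth helper does. A minimal sketch with hypothetical names, assuming a D3D-style non-reversed depth range (0 at near, 1 at far), not the exact Unity macros:

```python
def eye_to_buffer_depth(z_eye, near, far):
    """Window-space depth as stored in the buffer: 0 at the near plane,
    1 at the far plane, non-linear in between (D3D-style, non-reversed)."""
    return far * (z_eye - near) / (z_eye * (far - near))

def buffer_to_eye_depth(d, near, far):
    """Invert the mapping to recover linear eye depth
    (conceptually what LinearEyeDepth does)."""
    return far * near / (far - d * (far - near))
```

With reversed-Z (the UNITY_REVERSED_Z case in the earlier snippet) the 0/1 ends swap, and OpenGL maps to [-1, 1] instead, which is the cross-API mess being referred to.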

Thanks for the amazing answer, @bgolus ! This makes a lot of sense. Had to get my brain around the different effect of a normalized view direction and “a view direction vector where the z in view space is at a constant 1”. But it makes sense when I draw it out.

Even though that’s kind of exactly what I’m doing when I use this effect on a full-screen quad in front of the camera to add fog to the scene… I did that through a custom post-processing effect before.

This is an actual quad positioned in front of the camera, not actually a Blit(), correct? That’s kind of the big difference. A Blit() does weird things with the camera’s projection and world transform matrices so the “view direction” node won’t work properly there.

Yes. I had to modify the example “FullScreenQuadPass” because that one sets the View & Projection matrices to identity and breaks the view dir. I keep the matrices and (badly) figure out where to hover the quad in front of the camera.

var cmd = CommandBufferPool.Get(ProfilerTag);
// cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
var camTransform = camera.transform;
// Create a transformation that hovers the Quad 1 unit in front of the camera.
var fullScreenQuadMatrix = Matrix4x4.TRS(camTransform.TransformPoint(Vector3.forward), camTransform.rotation, Vector3.one);
cmd.DrawMesh(RenderingUtils.fullscreenMesh, fullScreenQuadMatrix, _material, 0, _materialPassIndex);
// cmd.SetViewProjectionMatrices(camera.worldToCameraMatrix, camera.projectionMatrix);
context.ExecuteCommandBuffer(cmd);

Good, I was afraid there might be some other reason why you shouldn’t (like performance or compatibility).

I’m finding this thread years later and doing exactly that. I’m trying to get SSAO in an unlit shader (Forward SSAO doesn’t work with unlit shaders in URP, at least it doesn’t for me). I stumbled across this article and it’s super handy.

I have a Render Objects feature that overrides the opaque geometry with a normals material, then a shader pulling from _CameraOpaqueTexture (which is my normals ‘pass’). Combining this with 4 samples (using UV offset and tiling on a screen UV node) looks like this:

I am rendering this on a quad in a Render Objects pass before post-processing, and it’s almost working:

However, the depth is obviously not translating correctly, because connecting just the output of the DepthToWorldPos (which I copied from you, @bgolus) results in this. What do you think is going wrong?

The above graph was made with one of the last versions of the LWRP, so I wasn’t sure if there might be some bug with the graph on newer SRPs.

Nope. I copied that shader directly into the latest URP and HDRP and it works. So there might be something not quite matching the graph I posted above. Here’s a zip of that exact shader.

Sampling two pixels: N is the original normal, V is the direction in world space between the two pixels, and d is the distance. Occlusion = max( 0.0, dot( N, V ) ) * ( 1.0 / ( 1.0 + d ) ), if you don’t want to look at the article.
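That formula can be sanity-checked in isolation. A minimal sketch in plain Python (hypothetical function name, assuming V is normalized; not the article’s exact code):

```python
def occlusion(n, v, d):
    """max(0, dot(N, V)) * (1 / (1 + d)): N is the surface normal,
    V the normalized world-space direction between the two samples,
    d the distance between them."""
    n_dot_v = max(0.0, sum(a * b for a, b in zip(n, v)))
    return n_dot_v * (1.0 / (1.0 + d))
```

On a flat surface V is perpendicular to N, so the dot product is 0 and no occlusion is contributed; the 1/(1+d) term makes the contribution fall off with sample distance.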

This results in:

The thing is, there’s a bunch of noise on the flat surfaces. This is meant to be canceled out by the dot product: the vector between two close pixels on the same surface, in world space, would return 0 when dotted with the surface normal, being perpendicular to it. Any ideas why this is goofing like this?

The method in the above shader graph for reconstructing a world position from the depth texture only works when using the depth from the current pixel. It will not calculate correct world positions for depth at other pixel positions. You need a different method for reconstructing the world position to do that.
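One such method, sketched here under assumed conventions with hypothetical names (not @bgolus’s exact code): build a view ray from the sampled pixel’s UV and the camera’s vertical FOV, keep its view-space z fixed at 1, and scale by that pixel’s eye depth. The sketch assumes UV (0,0) at the bottom-left, +z forward, and an identity camera rotation; a rotated camera would also need its rotation applied to the ray.

```python
import math

def view_ray(u, v, fov_y_deg, aspect):
    """Camera-space ray for a screen UV in [0,1], with z fixed at 1."""
    t = math.tan(math.radians(fov_y_deg) * 0.5)
    return ((2.0 * u - 1.0) * t * aspect,  # x
            (2.0 * v - 1.0) * t,           # y
            1.0)                           # unit view-space depth

def reconstruct(u, v, eye_depth, fov_y_deg, aspect, cam_pos=(0.0, 0.0, 0.0)):
    """Position of whatever the depth buffer saw at this UV."""
    ray = view_ray(u, v, fov_y_deg, aspect)
    return tuple(p + r * eye_depth for p, r in zip(cam_pos, ray))
```

Because the ray is rebuilt per sampled UV, this stays correct for neighboring pixels, unlike reusing the current pixel’s interpolated view direction.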

Hi! Do you by any chance know how to do this but with a lower mip level of the depth? The Scene Depth node contains an atlased depth pyramid like this:

This can be seen if tiling and/or offset is set to a custom value. The default screen position is set up to always show the depth at mip 0, but maybe there’s a way to use the lower mip-map levels as well?

Nope, no idea. I had no idea they were doing a depth pyramid now (though it’s a useful optimization for certain effects). If they’re doing that, presumably they’re using it someplace. You’d have to find where it’s being used and look at the code they have to sample those lower mip levels. Worst case, it’s something they’re controlling from C#, in which case there isn’t any way to do it. Best case, there’s a built-in function for getting the UV ranges for each mip, or maybe an array someplace.

I am actually using the screen FOV to take the tangent of the angle towards the pixel (multiplied by the eye depth). It actually works perfectly, if you can believe it. This gives me my different samples. My problem does seem to be in the occlusion calculation specifically, but I’ll copy your screen-to-world-pos into my shader instead and see if it works.

Hi! How can I apply a shader like this to the main camera in the HDRP? I have similar functionality implemented for the basic pipeline. For the basic pipeline, I can use something like Graphics.Blit(src, dst, Material), but it does not work for HDRP as it does not have OnRenderImage function. What is the workflow to apply a material to the camera rendered texture and display the result using HDRP?

Sorry for reviving an old post, but I don’t understand the math here.

Why can’t we just multiply the camera direction by the scene depth in Eye space and add that to the camera position?

Since the Scene Depth node in Eye space will return a positive true distance from the camera, and “Direction” from the Camera node is normalized, wouldn’t this work:

worldPos = camPos + camDir * depthEye

It’s the dot product of the view direction that i can’t seem to visually understand why we need that?

First, the Scene Depth node set to Eye Space returns a world scale depth, not a distance. But even if it did return a distance it wouldn’t help.

The Camera node’s Direction output is the forward vector of the game object. If the Camera game object has no rotation, then the value is float3(0.0, 0.0, 1.0) for all pixels across the screen. It does not have any information about what direction the current pixel is from the camera.

Here’s the image I always repost when the topic of depth and distance comes up.

The above graphic depicts the difference between the distance and depth for a single point visible to the camera. Something moving at a fixed distance from the camera moves along a sphere. Something moving at a fixed depth from the camera moves along a flat plane parallel to the camera’s view plane.

So the vector we need to reconstruct the world position from the depth is one that represents a point on a plane at a unit depth from the camera. And that’s what that dot product and divide give us.

The dot product of an arbitrary vector and a normalized vector returns the magnitude of the arbitrary vector along the normalized vector. In a simple case, imagine the normalized vector is float3(0,0,1) and the arbitrary vector is float3(-2, 4, 10). The dot product of those two is 10, because the arbitrary vector extends 10 units along the normalized vector’s direction. Dividing the arbitrary vector by that dot product rescales it so it’s 1 unit along the normalized vector. This works for any normalized vector direction. If the normalized vector is the camera’s forward direction, that new vector has a Z depth of 1 unit, but still points away from the camera along the original view ray. And we can multiply it by the depth texture’s value at that pixel to reconstruct the original position.
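Those numbers, worked through in plain Python just to check the arithmetic:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 0.0, 1.0)     # normalized vector (camera forward)
v = (-2.0, 4.0, 10.0)   # arbitrary vector
depth_along_n = dot(v, n)                             # 10.0
unit_depth_ray = tuple(c / depth_along_n for c in v)  # (-0.2, 0.4, 1.0)
# Scaling by a sampled depth of, say, 25 lands 25 units deep along n,
# in the original ray's direction: approximately (-5, 10, 25).
reconstructed = tuple(c * 25.0 for c in unit_depth_ray)
```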

I tried to use your Shader Graph setup but in code and it didn’t work, which is why I assumed it was wrong at first. Yet for some reason it only works as nodes and not in a Custom Function node… this is what I translated it to in code, passing the same node links into my custom function:

If I use the code version my caustics are all kind of messed up. But if I use the regular node setup that you provided it works fine. I don’t understand why that is…