Direction from camera to pixel is slightly shifted around the edges of the screen

Hi, I'm trying to get the direction from the camera to each pixel, but it is slightly shifted around the edges of the screen compared to default mesh objects, causing artifacts. I have no idea why it would do that.

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

struct v2f
{
    float4 pos : SV_POSITION;
    float3 wPos : TEXCOORD1;
    float2 uv : TEXCOORD0;
};

v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    o.wPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);
    // ... rest of the fragment shader omitted ...
}

What do you mean by “slightly shifted”? Can you show an example of what you’re seeing, and describe what you expect to see?

https://www.youtube.com/watch?v=2uhMMghRMfE

I expect culling to work. I don't think the problem is with depth but with direction, and it renders somehow with two different perspectives?? I really have no idea why it isn't working.

float depth = tex2D(_CameraDepthTexture, float2(i.pos.x / _ScreenParams.x, i.pos.y / _ScreenParams.y));
depth = LinearEyeDepth(depth);

Ah. I think I understand what’s going on.

Depth is not the same thing as distance.

Distance is, well, the distance between the camera and the position you're measuring against.
Depth is the distance from the camera to a plane that's parallel to the view plane and intersects the position you're measuring against. If you add a game object as a child of a camera, its transform's position is relative to the camera's position and orientation, and its "z" position is effectively the linear depth: it only changes when the object moves along that camera-relative axis.
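To make the depth/distance split concrete, here's a quick numeric sketch in plain Python standing in for the shader math (the camera setup and point are made up for illustration):

```python
import math

# Hypothetical setup: camera at the origin looking down +z.
cam_pos = (0.0, 0.0, 0.0)
cam_fwd = (0.0, 0.0, 1.0)

# A point off to the side of the view.
p = (3.0, 0.0, 4.0)

# Distance: straight-line length from the camera to the point.
distance = math.dist(p, cam_pos)

# Depth: how far along the camera's forward axis the point sits,
# i.e. the distance to the camera-parallel plane containing it.
depth = sum((pc - cc) * fc for pc, cc, fc in zip(p, cam_pos, cam_fwd))

print(distance)  # 5.0
print(depth)     # 4.0
```

For a point straight ahead of the camera the two values agree; the further a pixel is from the screen center, the more they diverge, which is exactly the "shifted around the edges" symptom.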

So you're either taking the linear eye depth and multiplying it by the normalized view direction to try to reconstruct a world position, or you're using that normalized view direction to offset the plane's position. Either way you're mixing depth and distance values, which results in the weirdness you're seeing.

If you're multiplying the depth by the normalized view direction, what you want to do instead is get a vector with a view-space z depth of 1 and multiply that by the depth to get the camera-relative world position.

float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);
float3 depthRay = viewDirection / dot(viewDirection, -UNITY_MATRIX_V[2].xyz);
float3 depthTextureWorldPos = _WorldSpaceCameraPos + depthRay * LinearEyeDepth(rawDepth);

That UNITY_MATRIX_V[2].xyz is the camera's normalized forward vector. It's negated because Unity uses OpenGL's funky -z forward view space for rendering. Taking the dot product of that forward vector with the view direction ray gets you the depth of the ray in camera space. Dividing the ray by that gives it a view-space z depth of 1. And multiplying that by the depth gets you the camera-relative world-space position.

At least when using a perspective camera.
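Here's a numeric check of that rescaling trick, in plain Python standing in for the shader code (the camera pose and surface point are arbitrary made-up values):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Made-up camera and surface point (world space).
cam_pos = (1.0, 2.0, 3.0)
cam_fwd = normalize((0.2, -0.1, 1.0))
surface = (4.0, 5.0, 12.0)

to_surface = tuple(s - c for s, c in zip(surface, cam_pos))
view_dir = normalize(to_surface)

# Linear eye depth: the surface's distance along the forward axis,
# i.e. what LinearEyeDepth(rawDepth) would give in the shader.
eye_depth = dot(to_surface, cam_fwd)

# Rescale the view direction so it advances exactly one unit of
# view-space z per unit of depth.
depth_ray = tuple(c / dot(view_dir, cam_fwd) for c in view_dir)

# Marching that ray by the depth recovers the original position.
reconstructed = tuple(c + r * eye_depth for c, r in zip(cam_pos, depth_ray))
print(reconstructed)  # ≈ (4.0, 5.0, 12.0)
```

Note that marching the *normalized* view direction by `eye_depth` instead would land short of the surface everywhere except dead center, which is the mismatch described above.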

This article might also be useful to you:


Thank you so much. You are the best

This was a super-useful conversation for me! In case it helps, here’s a slightly different formulation that helped me understand what the math is doing.

Here’s a picture of the difference between depth and distance:

We get depth from reading _CameraDepthTexture and converting it into view/eye space, but in order to get the worldspace position of A, what we really want is the distance value. How do we get that?

We have a couple other values available to us. We’ve got the view direction, and we also know the forward vector for the camera. If I add those to the picture, it looks like this:

For clarity, v is the normalized view direction, and f is the portion of v that lies along the camera's forward axis.

If you squint, you’ll notice that we have two similar triangles: the first has distance and depth as edges, and the other has v and f as edges. If we can figure out the ratio of v to f, then we can get the ratio between the distance and depth values. In other words:

distance / depth = |v| / |f|

Since v is normalized, then the above is the same thing as:

distance = depth / |f|

What's the magnitude of f? It's the length of the portion of v that points in the same direction as the camera's forward vector, and "the length of the portion of unit vector A that points along unit vector B" is exactly what the dot product gives us.

Here it is in code:

float depth = LinearEyeDepth(rawDepth);
float3 viewDirection = normalize(i.wPos - _WorldSpaceCameraPos);
float3 cameraForward = -UNITY_MATRIX_V[2].xyz;
float dist = depth / dot(viewDirection, cameraForward);
float3 depthTextureWorldPos = _WorldSpaceCameraPos + viewDirection * dist;

It's a slight rearrangement of @bgolus's math, but this is the way it clicked for me.
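The similar-triangles ratio can be sanity-checked numerically as well; here's a plain-Python sketch with made-up numbers (a 6-8-10 triangle so the values come out clean):

```python
import math

# Hypothetical setup: camera at the origin looking down +z.
cam_pos = (0.0, 0.0, 0.0)
cam_fwd = (0.0, 0.0, 1.0)
point   = (6.0, 0.0, 8.0)

to_point  = tuple(p - c for p, c in zip(point, cam_pos))
true_dist = math.dist(point, cam_pos)                        # 10.0
view_dir  = tuple(c / true_dist for c in to_point)

# Depth: the component of to_point along the forward axis.
depth = sum(t * f for t, f in zip(to_point, cam_fwd))        # 8.0

# |f| in the similar-triangles picture is dot(view_dir, cam_fwd),
# so distance = depth / |f|.
dist = depth / sum(v * f for v, f in zip(view_dir, cam_fwd))
print(dist)  # ≈ 10.0
```

The recovered `dist` matches the true straight-line distance, confirming the depth-to-distance conversion the shader snippet performs.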
