How to sample Depth Texture for Unwrapped Mesh?

I'm unwrapping the mesh vertices to paint on it, and sampling the depth texture to determine whether the brush is occluded by other objects. But for some reason I get the wrong UV coordinates when sampling the depth texture:
[attached image: depth.jpg]

How do I get the correct UV for Depth Texture sampling for an unwrapped mesh?

Shader

Shader "Custom/Renderer World 1"
{
    Properties { }

    SubShader
    {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane"}
        Cull Off Lighting Off ZWrite Off ZTest Always // "ZTest Off" is not a valid state; Always disables depth testing

        Pass
        {
            BlendOp Add, Add
            Blend SrcAlpha OneMinusSrcAlpha, SrcAlpha One             

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            float4x4 _Matrix_IVP;
            UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float4 worldPos : TEXCOORD1;
                float4 screenPos : TEXCOORD2;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = float4(float2(1, _ProjectionParams.x) * (v.uv.xy * float2(2, 2) - float2(1, 1)), 0, 1);                 
                o.uv = v.uv;
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.screenPos = ComputeScreenPos(UnityObjectToClipPos(v.vertex));
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float2 screenUV = i.screenPos.xy / i.screenPos.w;
                float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV);

                #if UNITY_REVERSED_Z
                    if (depth < 0.0001)
                        return 0;
                #else
                    if (depth > 0.9999)
                        return 0;
                #endif

                float4 positionCS = float4(screenUV * 2.0 - 1.0, depth, 1.0);

                #if UNITY_UV_STARTS_AT_TOP
                // positionCS.y = -positionCS.y;
                #endif
               
                float4 hpositionWS = mul(_Matrix_IVP, positionCS);
                float3 worldPos = hpositionWS.xyz / hpositionWS.w;
                float sceneZ = LinearEyeDepth(depth);
                return float4(worldPos, sceneZ);
            }
            ENDCG
        }
    }
}

C# Code

var mainCamera = Camera.main;
var projectionMatrix = GL.GetGPUProjectionMatrix(mainCamera.projectionMatrix, false);
var inverseViewProjectionMatrix = (projectionMatrix * mainCamera.worldToCameraMatrix).inverse;
rendererMaterial.SetMatrix("_Matrix_IVP", inverseViewProjectionMatrix);

Unless you’re using a geometry shader, ComputeScreenPos() should always take whatever value you’re outputting to SV_POSITION. In this shader, that’s the o.vertex.

o.screenPos = ComputeScreenPos(o.vertex);

Thank you! In that case I get the following result:
[attached image: scene_view.jpg]
UV:
[attached image: uv_view.png]

Is there a way to get the correct world position from the Depth Texture for an unwrapped mesh?

Ah, that’s a different question.

Sampling the depth texture gets you the nearest visible opaque depth for the camera that the depth texture was rendered from. For an unwrapped mesh, that’s not really useful since there’s nothing mapping the depth back to any particular mesh, let alone that mesh’s UVs. So to more accurately answer your original question, no, it’s impossible to correctly sample the depth texture from an unwrapped mesh.

That doesn’t mean you can’t get the world position of an unwrapped mesh. Just that you don’t need the depth texture to do that. You only need a depth texture to get the depth / world position of other things in the scene that aren’t the mesh currently being rendered.

Your best option is actually going to be to output the local position of the mesh in the unwrap, and then apply an object-to-world transform to that. For that you just need to pass along the original v.vertex.xyz from the vertex shader to the fragment shader and output it as-is.
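A minimal sketch of that idea, assuming the same unwrap-to-clip-space trick as the original vertex shader (the struct layout and comments here are illustrative, not from the thread):

```hlsl
// Carry the mesh's local-space position through to the fragment shader.
struct v2f
{
    float4 vertex   : SV_POSITION;
    float3 localPos : TEXCOORD0; // original object-space position
};

v2f vert (appdata v)
{
    v2f o;
    // Rasterize in UV space, as in the original shader.
    o.vertex = float4(float2(1, _ProjectionParams.x) * (v.uv.xy * 2 - 1), 0, 1);
    o.localPos = v.vertex.xyz; // pass the local position along unchanged
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // Either output the local position directly, or transform it here:
    float3 worldPos = mul(unity_ObjectToWorld, float4(i.localPos, 1)).xyz;
    return float4(worldPos, 1);
}
```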

Thank you!

My goal is to render the brush sample (a texture) as a spray tool without painting the back side / invisible parts of the mesh. I assumed a depth texture is required for that; am I missing something?

Could you please confirm if I understand correctly?

v2f vert (appdata v)
{
    v2f o;
    o.vertex = float4(float2(1, _ProjectionParams.x) * (v.uv.xy * float2(2, 2) - float2(1, 1)), 0, 1);
    //o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    o.worldPos = mul(unity_ObjectToWorld, v.vertex);
    o.screenPos = ComputeScreenPos(o.vertex);
    return o;
}

Mostly, yes. Though you no longer need the screen position since you won’t be using any of the camera textures.

Also depending on how you plan on using this texture, you may not want to apply the object to world transform now, but later when sampling the output render texture.
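To illustrate the deferred transform suggested above: the unwrap pass would store only the local position in the render texture, and the object-to-world transform would be applied later, wherever that texture is read back. A hedged sketch, where the texture name `_LocalPosTex` and the reading shader are assumptions for illustration:

```hlsl
sampler2D _LocalPosTex; // RT written by the unwrap pass, storing local positions

fixed4 frag (v2f i) : SV_Target
{
    // Read back the stored object-space position...
    float3 localPos = tex2D(_LocalPosTex, i.uv).xyz;
    // ...and only now apply the object-to-world transform, so the stored
    // data stays valid even if the object has moved between passes.
    float3 worldPos = mul(unity_ObjectToWorld, float4(localPos, 1)).xyz;
    return float4(worldPos, 1);
}
```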