Matching vertex depth to camera depth

Hi,

I’m hoping to get a definitive answer here, as I’ve been searching and working by trial and error on this problem for the last couple of days.

I’m trying to implement an off-screen render system for my particle system based on the method proposed here: Chapter 23. High-Speed, Off-Screen Particles | NVIDIA Developer

Basically I need to compare the depth of a vertex with the depth map already rendered by the camera, and clip fragments that should be occluded by opaque objects.

I have the pipeline set up and rendering just fine, but I am having trouble accurately clipping particles based on the camera depth texture.

I’ve tried many combinations of the macros listed here: Unity - Manual: Cameras and depth textures, trying to get a sample from the camera depth texture into the same projected space as the vertex depth. My shader currently looks like this:

Shader "Particles/OffscreenRender" {
    Properties{

    }
        SubShader{
                LOD 100
                Cull Off

                ZWrite Off
                Tags{ "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "DisableBatching" = "True" }
                Pass{
                        Blend[_SrcMode][_DstMode], One One
                        CGPROGRAM
                       
                        #pragma vertex vert
                        #pragma fragment frag
                        #pragma multi_compile_fwdbase nolightmap nodirlightmap nodynlightmap novertexlight
                        #pragma target 5.0
                        #include "UnityCG.cginc"
                       
                        /* Properties etc...*/
                       
                       
                        struct v2f
                        {
                            float4 pos : SV_POSITION;
                            float4 uv : TEXCOORD0;
                            float3 worldPos : TEXCOORD1;
                            float4 projPos : TEXCOORD2;
                            float4 color : TEXCOORD3;
                            float2 pID : TEXCOORD4;
                        };

                        struct VS_INPUT
                        {
                            float4 vertex        : POSITION;
                            float3 norm       : NORMAL;
                            float2 uv         : TEXCOORD0;
                            float4 col        : COLOR0;
                            uint   id         : SV_VertexID;
                        };

                        v2f vert(VS_INPUT v)
                        {

                            uint pIndex = v.id;
                            Particle p = Particles[pIndex];
                       
                            worldPosition = p.position;
                       
                            v2f o;
                            o.pos = UnityObjectToClipPos(worldPosition);
                           
                            v.vertex = o.pos;
                       
                            o.worldPos = worldPosition;
                           
                            o.projPos = ComputeScreenPos(o.pos);
                            o.projPos.z = COMPUTE_DEPTH_01;
                           
                            o.pID.x = pIndex;
                            o.color = float4(p.velocity, p.age);
                           
                            return o;
                        }


                        fixed4 frag(v2f i) : SV_Target
                        {

                            fixed4 output = _Color;

                            float sceneDepth = Linear01Depth(tex2Dproj(_CameraDepthTexture, i.projPos));
                            float particleDepth = i.projPos.z;

                            if(particleDepth > sceneDepth)
                                discard;
                           
                            return output;
                        }

                        ENDCG
                    }
                }
}

This does not seem to be the correct way to project the depth of a vertex into the same space as the camera depth texture. I know there is a difference between eye depth and screen depth, but I’m having trouble wrapping my head around the difference and which macros return which, so more clarity there would be great.
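For reference, here is my current understanding of the different depth values, as a minimal sketch (assuming a perspective camera, and that _CameraDepthTexture is declared with the other properties; the macros come from UnityCG.cginc):

// i.projPos is the interpolated result of ComputeScreenPos(clipPos).

// Raw hardware depth: the non-linear value stored in the depth buffer
// (reversed on platforms where UNITY_REVERSED_Z is defined).
float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos));

// Eye depth: linear distance along the view axis, in world units.
float eyeDepth = LinearEyeDepth(rawDepth);

// Linear 0-1 depth: eye depth remapped so the far plane becomes 1.
float depth01 = Linear01Depth(rawDepth);

// On the vertex side, COMPUTE_EYEDEPTH(o) and COMPUTE_DEPTH_01 produce the
// matching per-vertex values, but they implicitly read v.vertex and expect
// it to still hold the object-space input position.

If that is right, comparing COMPUTE_DEPTH_01 from the vertex shader against Linear01Depth of the depth-texture sample should be comparing like with like, which is what I’m attempting above.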

Otherwise, I’d really appreciate help on this matter. I’ve found MANY forum posts with similar questions, so it seems either the docs are lacking or I’m missing a fundamental bit of understanding. I’m super close; this is the last piece of the puzzle for me (for now…)

Thanks!
n

Here’s a free implementation of the proposed technique. I bet it can be dissected to see how it works.

Oh brilliant! I’ve even looked at this asset before, but it didn’t come to mind… Thanks for the tip!

So I downloaded that plugin, and it works fine… but I’m still having trouble with my version.

I’m using the same code in my shader to calculate the occlusion layer, but for some reason it does not have the same effect.

I am using command buffers rather than a separate camera to render the off-screen particles (or the off-screen cube, in this case). Beyond that I can’t think of what would be happening differently…

The cube in the scene view is just a sanity check; it is not being rendered by the camera, but by an off-screen command buffer.

Alright, got it working with the help of that plugin: I was accidentally calling ComputeScreenPos(v.vertex).
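For anyone who finds this later, the fix was a single line in the vertex shader (o.pos being the clip-space position from UnityObjectToClipPos, as in the shader above):

// Wrong: ComputeScreenPos expects a clip-space position,
// not the raw object-space input vertex.
// o.projPos = ComputeScreenPos(v.vertex);

// Right: use the clip-space position computed just before.
o.projPos = ComputeScreenPos(o.pos);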
