Hi,
Hoping I can get a definitive answer here, as I have been searching and working by trial and error on a problem for the last couple of days.
I’m trying to implement an off-screen rendering system for my particle system based on the method proposed here: Chapter 23. High-Speed, Off-Screen Particles | NVIDIA Developer
Essentially, I need to compare each particle fragment’s depth against the depth texture already rendered by the camera, and clip fragments that should be occluded by opaque geometry.
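In other words, the core of what I’m after at the fragment stage is just something like this (a sketch, assuming both depths have already been brought into the same linear 0–1 space):

    // sceneDepth:    opaque scene depth at this pixel, sampled from _CameraDepthTexture
    // particleDepth: this particle fragment's depth, in the SAME space
    if (particleDepth > sceneDepth)
        discard; // an opaque surface is in front of the particle at this pixel

Getting the two depths into that same space is exactly where I’m stuck.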
I have the pipeline set up and rendering just fine, but I am having trouble accurately clipping particles based on the camera depth texture.
I’ve tried many combinations of the macros listed here: Unity - Manual: Cameras and depth textures, trying to get a sample from the camera depth texture into the same space as the vertex depth. My shader currently looks like this:
Shader "Particles/OffscreenRender" {
    Properties {
    }
    SubShader {
        LOD 100
        Cull Off
        ZWrite Off
        Tags { "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "DisableBatching" = "True" }

        Pass {
            Blend [_SrcMode] [_DstMode], One One

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fwdbase nolightmap nodirlightmap nodynlightmap novertexlight
            #pragma target 5.0
            #include "UnityCG.cginc"

            /* Properties, _Color, _CameraDepthTexture and the Particle
               StructuredBuffer declarations etc... */

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 uv : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
                float4 projPos : TEXCOORD2;
                float4 color : TEXCOORD3;
                float2 pID : TEXCOORD4;
            };

            struct VS_INPUT
            {
                float4 vertex : POSITION;
                float3 norm : NORMAL;
                float2 uv : TEXCOORD0;
                float4 col : COLOR0;
                uint id : SV_VertexID;
            };

            v2f vert(VS_INPUT v)
            {
                uint pIndex = v.id;
                Particle p = Particles[pIndex];
                float3 worldPosition = p.position;

                v2f o;
                o.pos = UnityObjectToClipPos(worldPosition);
                // COMPUTE_DEPTH_01 reads v.vertex and expects an object-space
                // position, so write that back rather than the clip-space result.
                v.vertex = float4(worldPosition, 1.0);
                o.worldPos = worldPosition;
                o.projPos = ComputeScreenPos(o.pos);
                o.projPos.z = COMPUTE_DEPTH_01;
                o.pID.x = pIndex;
                o.color = float4(p.velocity, p.age);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                fixed4 output = _Color;
                // Sample the camera depth texture and linearise it to the 0-1 range.
                float sceneDepth = Linear01Depth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
                float particleDepth = i.projPos.z;
                // Clip fragments that lie behind the opaque scene.
                if (particleDepth > sceneDepth)
                    discard;
                return output;
            }
            ENDCG
        }
    }
}
This does not seem to be the correct way to bring the depth of a vertex into the same space as the camera depth texture. I know there is a difference between eye depth and 0–1 (screen) depth, but I’m having trouble wrapping my head around the difference and which macros return which, so more clarity there would be great.
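For reference, here is my current (possibly wrong) mental map of the UnityCG.cginc depth macros; if someone could confirm or correct this, that alone would help a lot:

    // Fragment shader: sample the raw, non-linear hardware depth value.
    float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos));

    // LinearEyeDepth: view-space distance from the camera, in world units.
    float eyeDepth = LinearEyeDepth(rawDepth);

    // Linear01Depth: the same distance remapped to 0..1 between camera and far plane.
    float depth01 = Linear01Depth(rawDepth);

    // Vertex shader: COMPUTE_EYEDEPTH and COMPUTE_DEPTH_01 produce the matching
    // values for the current vertex, so the pairs that should be comparable are:
    //   COMPUTE_EYEDEPTH <-> LinearEyeDepth(raw sample)
    //   COMPUTE_DEPTH_01 <-> Linear01Depth(raw sample)

Is that the right way to think about it?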
Otherwise, I would really appreciate help on this matter. I’ve found MANY forum posts with similar questions, so it seems the docs are lacking, or I’m missing a fundamental bit of understanding. I’m super close; this is the last piece of the puzzle for me (for now…).
Thanks!
n
