I wrote a shader that draws the scene as a depth map. It works, but I'm having a hard time wrapping my head around why I can call SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv) like this:
struct v2f {
    float2 uv : TEXCOORD0;
    float4 pos : SV_POSITION;
};

v2f vert (appdata v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.pos); // was o.vertex, which doesn't exist in v2f
    o.uv = v.uv;
    return o;
}

// ... code

fixed4 frag (v2f i) : SV_Target {
    // ... more code
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    // ... return ...
}
and have the shader color each fragment based on its position on the screen.
My assumption would be that the UV coordinates are in "texture space," and that _CameraDepthTexture is rendered so it covers the whole screen. Is that right?
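For comparison, here is the more explicit screen-space form I'd expect to need for geometry that isn't a full-screen quad. This is a sketch assuming the built-in render pipeline and Unity's appdata_base struct, using ComputeScreenPos from UnityCG.cginc instead of relying on the mesh UVs lining up with the screen:

```hlsl
struct v2f {
    float4 pos : SV_POSITION;
    float4 screenPos : TEXCOORD0;
};

v2f vert (appdata_base v) {
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // Derive screen-space coordinates from the clip-space position,
    // so the depth lookup works for arbitrary geometry
    o.screenPos = ComputeScreenPos(o.pos);
    return o;
}

fixed4 frag (v2f i) : SV_Target {
    // Perspective divide to get UVs in the 0..1 screen range
    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV);
    float linearDepth = Linear01Depth(depth);
    return fixed4(linearDepth, linearDepth, linearDepth, 1);
}
```

For a full-screen blit the two coincide, since the quad's UVs already span the screen, which may be why sampling with i.uv directly happens to work.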