So I took Unity's command buffer example project (the blurred-glass part of it) and I want to modify it so that objects behind the glass are blurred depending on their distance from the glass: the closer they are, the less blurred they look. Here is the effect I want to achieve:
The way I want to do it is to render the container walls with the effect shader and sample the main camera's depth texture (which holds the pasta's depth) to get the depth difference, which I then use as a lerp factor. The problem is that I cannot reliably compute the depth difference between the container wall and the depth buffer. Whatever I do, the depth difference changes depending on the camera position, and I don't know how to prevent it. Here is how I compute the depth:
v2f vert (appdata_t v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.projPos = float4(0, 0, 0, 0);
    // Depth of the wall vertex along the camera's view axis.
    COMPUTE_EYEDEPTH(o.projPos.z);
    #if UNITY_UV_STARTS_AT_TOP
    float scale = -1.0;
    #else
    float scale = 1.0;
    #endif
    o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y * scale) + o.vertex.w) * 0.5;
    o.uvgrab.zw = o.vertex.zw;
    o.uvmain = TRANSFORM_TEX(v.texcoord, _MainTex);
    UNITY_TRANSFER_FOG(o, o.vertex);
    o.screenUV = ComputeScreenPos(o.vertex);
    return o;
}

half4 frag (v2f i) : SV_Target
{
    float2 uv = i.screenUV.xy / i.screenUV.w;
    // Raw hardware depth of whatever is behind the wall (the pasta).
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    // Linearized depth of the scene behind the wall.
    depth = Linear01Depth(depth);
    return (i.projPos.z - depth) * _DepthPower;
    ...
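In case my reasoning is the problem, this is the relationship I assumed between the two values being subtracted (written out as a sketch of my mental model, not code from the project):

    // My mental model (which may be where I go wrong): both values measure
    // distance from the camera, so their difference should be the fixed
    // pasta-to-glass gap, independent of where the camera is.
    float wallDepth  = i.projPos.z;   // COMPUTE_EYEDEPTH: view-space units... of what range?
    float sceneDepth = depth;         // Linear01Depth: a 0..1 fraction of the far plane?
    float gap = (sceneDepth - wallDepth) * _DepthPower;  // expected to be stable, but it is not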
I then thought: OK, maybe depth is stored in some unusual way in the depth buffer, so let me instead compute the distance between two world-space points for every pixel:
v2f vert (appdata_t v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    #if UNITY_UV_STARTS_AT_TOP
    float scale = -1.0;
    #else
    float scale = 1.0;
    #endif
    o.uvgrab.xy = (float2(o.vertex.x, o.vertex.y * scale) + o.vertex.w) * 0.5;
    o.uvgrab.zw = o.vertex.zw;
    o.uvmain = TRANSFORM_TEX(v.texcoord, _MainTex);
    UNITY_TRANSFER_FOG(o, o.vertex);
    o.screenUV = ComputeScreenPos(o.vertex);
    // Ray from the camera towards this vertex; clipToWorld is set from
    // script and maps clip space back to world space.
    float4 clip = float4(o.vertex.xy, 0.0, 1.0);
    o.worldDirection = mul(clipToWorld, clip).xyz - _WorldSpaceCameraPos;
    // World-space position of the wall vertex itself.
    o.projPos = mul(clipToWorld, float4(o.vertex.xyz / o.vertex.w, 1.0));
    return o;
}

half4 frag (v2f i) : SV_Target
{
    float2 uv = i.screenUV.xy / i.screenUV.w;
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    depth = Linear01Depth(depth);
    // Reconstruct the world-space position behind the wall from the depth
    // buffer, then measure its distance to the wall position.
    float3 worldspaceDepth = i.worldDirection * depth + _WorldSpaceCameraPos;
    return length(worldspaceDepth - i.projPos.xyz) * _DepthPower;
    ...
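One detail I am not sure about in this version: o.vertex.xy are clip-space coordinates before the perspective divide (they range over -w…w, not -1…1), and mul(clipToWorld, …) also returns homogeneous coordinates, so perhaps both need a divide by w. A variant of the direction computation with explicit divides would look roughly like this (an unvalidated sketch; using z = 0.0 as the far plane assumes a reversed-Z platform):

    // Unvalidated sketch: the same worldDirection, with explicit homogeneous divides.
    float2 ndc = o.vertex.xy / o.vertex.w;    // -1..1 instead of -w..w
    float4 farClip = float4(ndc, 0.0, 1.0);   // this pixel on the far plane (reversed Z)
    float4 farWorld = mul(clipToWorld, farClip);
    farWorld /= farWorld.w;                   // homogeneous -> Cartesian
    o.worldDirection = farWorld.xyz - _WorldSpaceCameraPos;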
Either way, the final result depends on the camera position, regardless of _DepthPower: when I move the camera, the entire scene goes from black to white or the reverse. Is there any way to reliably get a stable difference between the two depths and map it into the 0…1 range?
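For instance, is something like this the right direction? A sketch of the comparison I imagine should be stable, assuming LinearEyeDepth converts the buffer sample into the same view-space units that COMPUTE_EYEDEPTH writes, and using a hypothetical _FadeDistance property (not in the project) for the 0…1 mapping:

    half4 frag (v2f i) : SV_Target
    {
        float2 uv = i.screenUV.xy / i.screenUV.w;
        // Assumption: LinearEyeDepth puts the sample into view-space world
        // units, the same space COMPUTE_EYEDEPTH wrote into i.projPos.z.
        float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));
        float wallDepth = i.projPos.z; // from the first vertex shader above
        // _FadeDistance is hypothetical: the world-space gap at which the
        // blur reaches full strength. 0 = pasta touching the glass.
        float lerpFactor = saturate((sceneDepth - wallDepth) / _FadeDistance);
        return lerpFactor;
    }

If the two values are in different spaces to begin with, that would at least explain why my subtraction drifts with the camera.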