Hello,
I’m trying to create a fake skybox room effect that requires writing to and sampling several depth buffers, but I’m running into an issue with the accuracy of the depth buffer sample. The problem is also much more noticeable at smaller screen resolutions.
This video shows the problem I’m having.
According to this post [converting-depth-values-to-distances-from-z-buffer], you can accurately calculate the distance to the depth buffer by computing the view direction in the vertex shader and relying on the rasterizer’s interpolation to get the proper per-fragment distance in the fragment shader.
However, because I’m not using a post-processing script, I can’t use that interpolation step.
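For reference, this is roughly how I understand the approach from that post (a sketch of *their* full-screen technique, not my code — `appdata_img` and `_CameraDepthTexture` are the standard Unity post-effect inputs, which is exactly why it doesn’t transfer directly to my mesh shader):

```hlsl
struct v2f_post {
    float4 vertex  : SV_POSITION;
    float2 uv      : TEXCOORD0;
    float3 viewDir : TEXCOORD1; // view-space ray to the far plane
};

v2f_post vert (appdata_img v) {
    v2f_post o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.texcoord;
    // Ray through this corner of the full-screen quad; the rasterizer
    // interpolates it into a per-fragment ray.
    float4 ray = mul(unity_CameraInvProjection, float4(v.texcoord * 2 - 1, 1, 1));
    o.viewDir = ray.xyz / ray.w;
    return o;
}

float4 frag (v2f_post i) : SV_Target {
    float depth01 = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
    // Scaling the interpolated far-plane ray by the 0..1 depth gives the
    // view-space position of whatever the depth buffer saw.
    float3 viewPos = i.viewDir * depth01;
    float dist = length(viewPos);
    // ... use dist ...
    return float4(dist.xxx, 1);
}
```

In a post effect the vertex stage runs on a quad whose corners map exactly to the screen corners, which is what makes the interpolated ray correct per pixel.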
I have to do this custom depth check because I need to disable ZWrite for this shader; there are also more clipping checks that I removed from the script below to focus on the issue at hand. Here’s a video that shows what I’m trying to do.
samplerCUBE _MainTex;
// Depth buffer rendered by the room camera (set from the C# below).
sampler2D_float roomDepthTexture;

struct appdata {
    float4 vertex : POSITION;
};

struct v2f {
    float4 vertex : SV_POSITION;
    float3 worldPosition : TEXCOORD0;
    float4 screenPosition : TEXCOORD2;
};

v2f vert (appdata v) {
    v2f output;
    output.vertex = UnityObjectToClipPos(v.vertex);
    output.worldPosition = mul(unity_ObjectToWorld, v.vertex).xyz;
    output.screenPosition = ComputeScreenPos(output.vertex);
    return output;
}

float4 frag (v2f input) : SV_Target {
    float distanceToFragment = length(input.worldPosition - _WorldSpaceCameraPos);
    float2 screenUV = input.screenPosition.xy / input.screenPosition.w;

    // I think this is the problem: reconstructing the view-space position
    // of whatever the room depth buffer saw at this pixel.
    float4 direction = mul(unity_CameraInvProjection, float4(screenUV * 2 - 1, 1, 1));
    float roomDepthSample = Linear01Depth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV));
    float3 roomViewPos = (direction.xyz / direction.w) * roomDepthSample;
    float distanceToRoomDepthBuffer = length(roomViewPos);

    // Keep only fragments that lie on or in front of the stored depth.
    bool isInitialWall = distanceToRoomDepthBuffer > distanceToFragment - 0.001;
    if (!isInitialWall)
        clip(-1);

    float4 color = texCUBE(_MainTex, normalize(input.worldPosition - _WorldSpaceCameraPos.xyz));
    return color;
}
These are the settings I’m using for the depth buffer.
At one point I thought there might be an issue with anti-aliasing, so I disabled all anti-aliasing on every camera and render texture, but the issue remained. These textures are recreated whenever the screen resolution changes so they match the new screen size.
// GraphicsFormat lives in UnityEngine.Experimental.Rendering.
renderTexture = new RenderTexture(Screen.width, Screen.height, 32, RenderTextureFormat.Depth);
renderTexture.depthStencilFormat = GraphicsFormat.D32_SFloat;
renderTexture.filterMode = FilterMode.Point; // no bilinear blending between depth samples
renderTexture.wrapMode = TextureWrapMode.Clamp;
renderTexture.Create();
Shader.SetGlobalTexture("roomDepthTexture", renderTexture);
I only ever need to do comparison checks against the depth buffer.
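Because of that, would it be valid to skip the world-distance reconstruction entirely and compare eye depths instead? Something like this (untested sketch — it assumes the room camera shares the main camera’s near/far planes, since `LinearEyeDepth` uses the current camera’s `_ZBufferParams`):

```hlsl
// In v2f, add:   float eyeDepth : TEXCOORD1;
// In vert, add:  COMPUTE_EYEDEPTH(output.eyeDepth);  // stores -viewPos.z

bool isInFrontOfRoomDepth (v2f input) {
    float2 screenUV = input.screenPosition.xy / input.screenPosition.w;
    // Eye depth of whatever the room depth buffer saw at this pixel.
    float roomEyeDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(roomDepthTexture, screenUV));
    // Both values are distances along the camera's forward axis, so a pure
    // comparison needs no inverse-projection reconstruction at all.
    return roomEyeDepth > input.eyeDepth - 0.001;
}
```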
My question is: how do I get an accurate world distance from the depth buffer? Or is there a different/better way to compare the current fragment’s distance against a depth buffer from within a regular mesh-rendering shader?