So, due to the huge performance benefit of not using a separate pass for the depth texture (especially in VR) and only needing it in late passes (transparents, post-processing), I switched to a custom method using a RWTexture2D (RGB is used for distortion etc., alpha stores depth).
For some reason I am getting noise shimmering (I suspect floating-point error or Z-fighting).
For testing purposes I use this basic fragment shader, which checks the previously written value and, if the current pixel is closer, updates the buffer, then returns the color saved at the current pixel coordinates.
RWTexture2D<float4> textureBuffer : register(u1);

fixed4 frag (v2f i) : SV_Target
{
    int2 pixelUV = UnityPixelSnap(i.pos);
    //pixelUV = floor(i.screenPos.xy / i.screenPos.w * _ScreenParams.xy); // Same result as above

    // If the current fragment is closer than the stored depth, overwrite the texel
    if (Linear01Depth(i.pos.z) - Linear01Depth(textureBuffer[pixelUV].a) < 0)
    //if (textureBuffer[pixelUV].a < i.pos.z) // Same result as above
        textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);

    return textureBuffer[pixelUV].a;
}
The buffer is cleared in a compute shader (dispatched in Camera.OnPreRender()).
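A minimal sketch of such a clear kernel (the Clear kernel name and _ClearValue property are assumptions, not the original code):

// Clear.compute (sketch)
#pragma kernel Clear

RWTexture2D<float4> textureBuffer;
float4 _ClearValue; // alpha must match the far-plane depth convention (0.0 or 1.0)

[numthreads(8,8,1)]
void Clear (uint3 id : SV_DispatchThreadID)
{
    textureBuffer[id.xy] = _ClearValue;
}

// Each group covers 8x8 pixels, so the C# side must dispatch
// ceil(width / 8) x ceil(height / 8) groups to clear the full texture.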
What are you actually trying to do? And why are you clearing it in a compute shader? That’s very odd. You should be using CommandBuffer.ClearRenderTarget()
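A sketch of what that could look like (the camera event and component setup are assumptions):

// C# sketch: clear the UAV with a command buffer instead of a compute dispatch
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class ClearRWBuffer : MonoBehaviour
{
    public RenderTexture textureBuffer; // the UAV the fragment shader writes to
    CommandBuffer cb;

    void OnEnable()
    {
        cb = new CommandBuffer { name = "Clear RW buffer" };
        cb.SetRenderTarget(textureBuffer);
        // Clear color only; the alpha clear value matters for the depth comparison
        cb.ClearRenderTarget(false, true, Color.clear);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
        cb.Release();
    }
}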
You are overcomplicating this by a lot. Call SetTargetBuffers on your camera with both a color texture and a depth texture as inputs. You don’t have to manually clear the textures; they will be cleared automatically by the camera according to its clear flags. This is also a much more optimized way to do it, especially if you are on mobile.
In OnPostRender, do camera.targetTexture = null. Then do Graphics.Blit(yourColorTexture, null)
(null in this case means it will be written straight to the screen)
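A rough sketch of that setup, assuming the textures are already created (names are placeholders):

// C# sketch of the SetTargetBuffers approach
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class TargetBuffersExample : MonoBehaviour
{
    public RenderTexture colorTexture; // standard color RT
    public RenderTexture depthTexture; // RT created with a depth format
    Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
    }

    void OnPreRender()
    {
        // Re-bind every frame, since OnPostRender resets the target below
        cam.SetTargetBuffers(colorTexture.colorBuffer, depthTexture.depthBuffer);
    }

    void OnPostRender()
    {
        cam.targetTexture = null;                         // detach the camera target
        Graphics.Blit(colorTexture, (RenderTexture)null); // null destination = screen
    }
}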
SetTargetBuffers is a dream, but in VR there is an ongoing bug which renders gray to the screen, so I am stuck with my solution for now. Any idea what might cause the noise shimmering seen in my first post?
You absolutely do not want to be converting the depth to linear to test the values against each other. Leave them exactly as they are and compare them directly; the conversion will cause precision issues.
Depending on the platform, the depth values may be 1.0 near, 0.0 far, so you need to check #ifdef UNITY_REVERSED_Z to know whether the comparison should be >= or <=. The value you clear to (0.0 or 1.0) changes in the same way.
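In shader terms, a sketch of that test on the raw depth (no Linear01Depth) would be roughly:

// Compare raw hardware depth directly, flipping the test per platform
int2 pixelUV = UnityPixelSnap(i.pos);
#if defined(UNITY_REVERSED_Z)
    // 1.0 = near, 0.0 = far; clear the buffer’s alpha to 0.0
    if (i.pos.z >= textureBuffer[pixelUV].a)
#else
    // 0.0 = near, 1.0 = far; clear the buffer’s alpha to 1.0
    if (i.pos.z <= textureBuffer[pixelUV].a)
#endif
        textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);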
Presumably you’re still using a depth buffer, and you’re adding that code to your opaque objects’ shaders so it renders them and writes to this “depth buffer” in the same pass? If so, you shouldn’t even need to do the test! It should already have been done by the depth test itself.
The big caveat to all of this is MSAA and write order. Random write targets in the fragment shader are tricky, especially when MSAA is involved. There’s no guarantee that the fragments within a single pixel will run in a nice orderly fashion; they may even run in parallel, meaning different fragments may read and then write to the same “pixel” at the same time (because they all passed the test before other fragments in the same pixel wrote to it). This can mean the “wrong” data gets written, or in the worst (and unlikely) case the data gets corrupted. I don’t know of a way around this.
Last thing… I believe that compute shader is only clearing the corner 8x8 pixels.
I am not using a depth pre-pass; at the end of each opaque shader’s fragment pass (fwdBase only), there is the code from my first post. It indeed renders them and writes to my RW buffer in the same pass:
int2 pixelUV = UnityPixelSnap(i.pos);
//pixelUV = floor(i.screenPos.xy / i.screenPos.w * _ScreenParams.xy); // Same result as above

// Manual depth test against the stored value, then write
if (textureBuffer[pixelUV].a < i.pos.z)
    textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z);
If I don’t do the test, it writes the full mesh, even the occluded parts.
This could potentially be the source. I would personally prefer an unresolved multisampled RW buffer anyway; any thoughts on whether it is possible to access per-subsample depth?
It doesn’t seem so to me, as it works as expected (except for the noise).
That’s not what I was asking about. The depth buffer and depth texture (generated by a depth pre-pass) are entirely separate things when using the forward renderer … which is specifically the issue you’re trying to resolve. Forward rendering still requires a depth buffer to handle opaque z sorting, and ideally you’d just copy that to a texture once the forward opaques have finished rendering.
Unity has functionality to copy a depth buffer to a texture, that’s how the depth texture is created, but it’s not exposed to c# even after several of us early VR devs asking for it for exactly the reason you’re trying to work around now. It was eventually exposed as something for the SRP, but not the built in render pipeline. Part of the problem was Unity was missing multi-sample texture sampling, and all multi-sample textures were auto-resolved by the GPU, which is bad for depth textures. That didn’t get added until they were well into the SRP development and BIRP dev had been functionally abandoned. I believe there are some URP or HDRP branches that have post-opaque pass depth texture resolves working, though I don’t think it’s in the main branches yet.
What happens if you add [earlydepthstencil] to your shaders just above the frag function? I wonder if writing to the RW buffer disables early depth rejection.
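i.e., a sketch of the earlier fragment shader with the attribute added (and the manual test dropped, since the hardware test would run first):

RWTexture2D<float4> textureBuffer : register(u1);

// [earlydepthstencil] forces the depth/stencil test to run before the shader,
// so occluded fragments never execute the UAV write at all
[earlydepthstencil]
fixed4 frag (v2f i) : SV_Target
{
    int2 pixelUV = UnityPixelSnap(i.pos);
    textureBuffer[pixelUV] = float4(0.1.xxx, i.pos.z); // no manual test needed
    return textureBuffer[pixelUV].a;
}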
Nope. You’d need to resolve the depth buffer to a texture directly. The depth value the fragment shader gets won’t even match any of the depth buffer’s subsample values, since they’re not at the same subpixel positions!
I guess it depends on how you’re calling Dispatch from C#.
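With a [numthreads(8,8,1)] kernel, the C# side has to dispatch enough groups to cover the whole texture; a sketch (clearShader, kernelIndex and textureBuffer are assumed names):

// Dispatch enough 8x8 groups to cover the full texture
int groupsX = Mathf.CeilToInt(textureBuffer.width / 8f);
int groupsY = Mathf.CeilToInt(textureBuffer.height / 8f);
clearShader.SetTexture(kernelIndex, "textureBuffer", textureBuffer);
clearShader.Dispatch(kernelIndex, groupsX, groupsY, 1);
// Dispatch(kernelIndex, 1, 1, 1) would clear only a single 8x8 group: the corner.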
One thing that bugs me: you should be assigning a render texture to textureBuffer as the object the RWTexture2D reads from and writes to. And you should be able to call clear on that directly.
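For example, a sketch of that setup (size and format are assumptions):

// C# sketch: create a UAV-capable render texture and bind it to u1
var textureBuffer = new RenderTexture(Screen.width, Screen.height, 0, RenderTextureFormat.ARGBFloat);
textureBuffer.enableRandomWrite = true; // required for RWTexture2D access
textureBuffer.Create();
Graphics.SetRandomWriteTarget(1, textureBuffer); // index 1 matches register(u1)

// Clearing it is then just a regular render target clear
var previous = RenderTexture.active;
RenderTexture.active = textureBuffer;
GL.Clear(false, true, Color.clear); // alpha clear value must match the depth convention
RenderTexture.active = previous;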