(Copied from r/Unity3D because no one replied to me there.)
Hello, I’m trying to copy the depth buffer to the camera target after post-processing in my render pipeline, but instead it’s just black. As a result, the gizmos (which I draw through UnityEditor.Handles.DrawGizmos, by the way) aren’t affected by depth unless I disable post-processing. I render the depth buffer into a separate texture for shaders to use, but that’s about it. There’s also an odd thing: when I draw depth of field (which happens in a second pass), the frame debugger shows the depth buffer actually being copied to that texture, but when I don’t render it, the same happens with a different effect instead. I’m using Unity 2019.1.9f1, by the way. Maybe this was fixed in a later release, but I’m not updating on my current internet connection, no thanks.
Things I’ve already tried:
1. Rendering depth into BuiltinRenderTextureType.Depth. It does nothing and instead prints a warning in the console about target type 3 not existing.
2. Not using CommandBuffer.Blit and doing it manually (rendering a fullscreen quad with a material). Same thing, minus the second-pass mystery.
3. Copying depth with a pass at the end. No depth buffer at all.
4. Setting the render target to the destination texture’s color buffer and the source’s depth buffer. As expected, nothing, even though others (including the official documentation) say it works. Might just be a misunderstanding, English isn’t my mother tongue.
5. Same as 4, but setting the render target’s depth to the camera target instead. Result is the same.
6. Copying depth in every pass. Same as blitting it, including the second-pass mystery.
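For context, attempt 4 looked roughly like this (a sketch only; cmd, destination, and depthSource are placeholder names for my command buffer and render targets, not exact code):

```csharp
// Sketch of attempt 4: bind the destination's color buffer together with the
// source's depth buffer. All identifiers here are placeholders.
cmd.SetRenderTarget(
    destination,   // RenderTargetIdentifier of the post-processing result (color)
    depthSource);  // RenderTargetIdentifier of the texture holding depth
context.ExecuteCommandBuffer(cmd);
cmd.Clear();
```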
Could someone help me, please? I feel like I’m going crazy because of this.
P.S.: No, I don’t want to use LWRP/HDRP. I don’t know how to write custom renderers for the former so I can have toon shaders, and I don’t know how to access lights in a shader graph so I can calculate lighting myself; this is why I decided to make my own RP. If someone tells me how to do that, it would also be very nice, and I might move to LWRP.
P.P.S.: I didn’t find any tutorials for post-processing with SRP online and my current system is based on researching LWRP.
OK, I will probably have to rewrite the entire thing. I messed up, and for whatever reason the entire image slowly turns white, similar to how it does in Source games when you go out of bounds with HDR on. For now I’ll refrain from doing any post-processing until this question is solved or someone posts a proper tutorial on how to do post-processing in SRP.
After doing more research, it turned out I was being stupid. ZWrite was set to Off in my copy-depth shader. Setting it to On did not add a depth buffer to the camera target, unfortunately (probably because it doesn’t have a depth buffer to begin with?), but gizmos now work as intended, fading out when occluded and everything. This question is now solved.
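For anyone who finds this later, a minimal sketch of what the copy-depth shader looks like after the fix (the texture name and overall structure are assumptions from my setup, yours may differ):

```
Shader "Hidden/CopyDepth"
{
    SubShader
    {
        Pass
        {
            ZWrite On    // this was Off; with ZWrite Off the SV_Depth output is discarded
            ZTest Always
            ColorMask 0  // we only want to write depth, not color

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _CameraDepthTexture; // assumed name of the global depth texture

            struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

            v2f vert (appdata_img v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = v.texcoord;
                return o;
            }

            // Output the sampled depth straight into the depth buffer.
            float frag (v2f i) : SV_Depth
            {
                return SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
            }
            ENDCG
        }
    }
}
```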
Hey! I’m facing the same bloody issue.
And it looks like you’ve made some progress with it. Can you please provide some details about how exactly you solved it? If I understand correctly, you copy the depth buffer to BuiltinRenderTextureType.CameraTarget somewhere before drawing gizmos?
I’m using a custom render pipeline with MRT (so I create a depth buffer to attach as a render target), and the problem is that gizmos aren’t affected by depth, as the OP mentioned.
I’m assuming that I either need to copy my existing depth buffer to be used later when gizmos are rendered, or I need to hook into one of the BuiltinRenderTextureType targets. When I try to copy/render anything to BuiltinRenderTextureType.Depth, I receive a “target type 3 not existing” error in the console.
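My MRT setup looks roughly like this (a sketch with placeholder names, not my exact code):

```csharp
// Bind several color attachments plus my own depth buffer as the render target.
// gBufferA, gBufferB, and depthBuffer are placeholder RenderTargetIdentifiers.
var colors = new RenderTargetIdentifier[] { gBufferA, gBufferB };
cmd.SetRenderTarget(colors, depthBuffer);
context.ExecuteCommandBuffer(cmd);
cmd.Clear();
```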
The depth texture is set as a global beforehand, so shaders can use it. You should probably try Blit(depthTexture, BuiltinRenderTextureType.CameraTarget, copyDepthMaterial), depending on how your shader works.
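In command-buffer form, that suggestion is roughly this (a sketch; depthTexture and copyDepthMaterial are whatever your pipeline already has, and the material’s shader needs to write SV_Depth with ZWrite On):

```csharp
// Copy the depth texture into the camera target with a copy-depth material.
var cmd = new CommandBuffer { name = "Copy Depth To Camera Target" };
cmd.Blit(depthTexture, BuiltinRenderTextureType.CameraTarget, copyDepthMaterial);
context.ExecuteCommandBuffer(cmd);
cmd.Release();
```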
Unfortunately it doesn’t help. I’ve been trying a similar approach, and it doesn’t seem to have any effect, so maybe there’s an issue with my CopyDepth shader. Here it is: