To summarize, I’m trying to replicate what’s described in this short 1:30 video:
First, I’d like to clarify that this is NOT about URP’s “Camera Stacking” feature. This is about the Built-in Render Pipeline, and manually stacking cameras that target different layers in order to separate post-processing effects.
Basically, I want to apply certain post-processing effects to specific layers of GameObjects rather than to everything in the Main Camera. I’ll refer to my two cameras as ‘Main Camera’ and ‘Post-Camera’.
However, unlike what’s seen in the video, my Post-Camera causes everything rendered by the Main Camera to be cleared. The Post-Camera has its clear flags set to “Don’t Clear”, and a depth higher than the Main Camera’s, so I’m lost as to why this is happening. I’ve tried various fixes: disabling HDR/MSAA, changing the Rendering Path, and tweaking basically every core Camera option.
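For reference, here’s roughly how the two cameras are configured. This is a sketch in Unity C# of what I actually set in the Inspector (the layer names “MainLayer” and “PostLayer” and the class/field names are just placeholders for this example):

```csharp
using UnityEngine;

// Sketch of the two-camera setup (Built-in Render Pipeline).
// Assumes two layers named "MainLayer" and "PostLayer" exist in the project.
public class CameraStackSetup : MonoBehaviour
{
    public Camera mainCamera; // renders the skybox plus "MainLayer"
    public Camera postCamera; // renders only "PostLayer" on top

    void Start()
    {
        mainCamera.clearFlags = CameraClearFlags.Skybox;
        mainCamera.cullingMask = LayerMask.GetMask("MainLayer");
        mainCamera.depth = 0;

        // Renders after the Main Camera and should NOT clear its output.
        postCamera.clearFlags = CameraClearFlags.Nothing; // "Don't Clear" in the Inspector
        postCamera.cullingMask = LayerMask.GetMask("PostLayer");
        postCamera.depth = 1; // higher depth => renders later
    }
}
```

Each camera also has its own Post Process Layer/Volume so the bloom settings differ between them.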
For this test project, I have some light bloom on the Main Camera and some very heavy bloom on the Post-Camera, to easily confirm that the post-processing is applied separately to each camera/GameObject. The two objects do get different post-processing, but the Post-Camera is clearing everything from the Main Camera despite being set to “Don’t Clear”. Here’s a quick recording of my test project which shows the cameras’ configurations and the odd result:
The intended/desired result is for the skybox/solid color background and the left sphere to still be visible. Does anyone understand why the ‘Don’t Clear’ camera flag is, in fact, clearing everything?
My actual use case has nothing to do with bloom; this is just a simple test project, but I’m hitting the same issue in my real project.
The project is now on the latest LTS release, Unity 2022.3.1f1 (and I had the same issue before updating, on Unity 2021.3.8f1). I’m also open to alternative approaches rather than fixing this specific issue, if anyone knows another good way to separate post-processing effects between layers.