Multiple Scene Post Processing interfering with each other

Hi, we have multiple additive scenes that each contain their own Post Processing Volume.
Now when an overlay scene is loaded on top of a base scene that has bloom enabled, the overlay scene is also rendered with bloom. That's not what we want.

What is the best way to make each volume's settings exclusive to the scene it's in? We can't tell in advance which combinations of scenes will be loaded on top of each other, so "hardcoding" a layer mask isn't a satisfying solution.

Any tips appreciated :wink:

To be clear, are the additive scenes in the same place (like a room and its objects), or are they different rooms? In the second case I think you can use a collider to enable a post-processing profile; otherwise, yeah, it can get complicated.

I don't think you can do the first case, with different profiles at the same place. Maybe it's possible with different Forward Renderers and camera stacking with layers etc., but it will be complicated as hell to avoid ending up with something buggy that kills your performance.
In my case I managed to disable post-processing on a particular layer in URP, but choosing which profile goes with which camera… I don't think that's available yet. I could be wrong about this.

Hm, not sure what you mean by the scenes being in the same place. They coexist in the hierarchy at runtime:
[Screenshot: the scene hierarchy with the additively loaded scenes at runtime]

If nothing else helps, we would need to set up a layer for every scene and set each scene camera's Volume Mask so it only picks up the post processing for that layer. But I bet my team won't accept this x)
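For reference, that manual version would be something like this in URP (an untested sketch; "LobbyPP" is just an example layer name that would have to exist in the project's Tags and Layers settings, with the scene's volumes placed on that layer):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Sketch: restrict this scene camera's volume evaluation to "its" layer.
// Assumes a layer named "LobbyPP" exists and the scene's volumes sit on it.
public class ScopeVolumesToLayer : MonoBehaviour
{
    void Start()
    {
        var camData = GetComponent<Camera>().GetUniversalAdditionalCameraData();
        camData.volumeLayerMask = LayerMask.GetMask("LobbyPP");
    }
}
```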

Hummm, so one scene is not a "room", they are superimposed in space, I think. I mean absolute position when I say "in the same place". But if it's only about enabling the post process on one layer (or the inverse), you only need one layer; that's my solution for that:

I use it so some elements (world-space UI) are not touched by post processing, via a NoPostProcess layer.
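One way to do that kind of layer exclusion in URP is with a camera stack, roughly like this (just a sketch, not necessarily the only way; "NoPostProcess" is the layer name I mentioned):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Sketch: render the "NoPostProcess" layer through an overlay camera that has
// post-processing disabled, so those objects skip bloom/glow entirely.
public class NoPostProcessStack : MonoBehaviour
{
    public Camera baseCamera;     // renders everything else, post-processing on
    public Camera overlayCamera;  // renders only the excluded layer

    void Start()
    {
        int noPpMask = LayerMask.GetMask("NoPostProcess");

        baseCamera.cullingMask &= ~noPpMask;   // base camera ignores that layer
        overlayCamera.cullingMask = noPpMask;  // overlay camera renders only it

        var overlayData = overlayCamera.GetUniversalAdditionalCameraData();
        overlayData.renderType = CameraRenderType.Overlay;
        overlayData.renderPostProcessing = false;

        // Stack the overlay on top of the base camera.
        baseCamera.GetUniversalAdditionalCameraData().cameraStack.Add(overlayCamera);
    }
}
```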

But what I don't understand is why you would mix multiple post-process profiles at the same location/position? That makes no sense to me.
From what I read, @Noblauch, I think it's the lobby that is problematic (I suppose a lobby or a room for a multiplayer hall?), but it's easy to teleport the player from one location with one post-process profile to another, and to make collider-based volumes like this:
(the video is queued to the relevant part)
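In code, that kind of collider-bound local volume is roughly this (a sketch; the profile field is just whatever Volume Profile asset you want to blend in there):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: a local (non-global) Volume bound to a trigger collider.
// Only cameras whose position is inside the collider blend this profile in.
public class LocalPostProcessVolume : MonoBehaviour
{
    public VolumeProfile localProfile;   // assign a Volume Profile asset here

    void Awake()
    {
        var col = gameObject.AddComponent<BoxCollider>();
        col.isTrigger = true;
        col.size = new Vector3(10f, 10f, 10f);

        var volume = gameObject.AddComponent<Volume>();
        volume.isGlobal = false;       // local: only applies inside the collider
        volume.blendDistance = 1f;     // fade near the collider bounds
        volume.sharedProfile = localProfile;
    }
}
```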

I really don't understand what types of objects you need to apply different post processing to; I mean, post processing is applied to the camera image, so all objects are affected the same way.

Hey, thanks for the reply again!
The app is 100% UI. The post processing is used for glow, anti-aliasing, and some effects in the future. The additive scenes are overlays, each with their own camera. If I have a "you won XY" overlay with a ton of glow, I don't want the lobby / main menu in the background to glow the living "s" out of the player xP

I hope you understand the setup better now :s ^^

Aaaaaaah, now I understand better why you were confused about location and post process. But in that case, since there is no correlation between what the cameras render in 3D/2D space, because the whole game is UI
(wait, is it even possible without a single sprite in the scene?? That's another question),
you can do the trick of separating the camera locations: make a volume with a different post-process profile and a collider for each camera location, and since I think your canvas render mode is Screen Space - Camera, you select that camera on the canvas in the scene, and tadaa!! There is no need to split things into different layers.

The trick is that you don't care about what the different cameras render, because everything is canvas, am I right (no SpriteRenderer or MeshRenderer)?
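Very roughly, in code, the idea would be this (a sketch only, assuming URP; the offset and names are made up):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: give each overlay its own spot in world space, put a local volume
// there, and drive the canvas from that camera (Screen Space - Camera).
public class OverlaySetup : MonoBehaviour
{
    public Camera overlayCamera;
    public Canvas overlayCanvas;
    public VolumeProfile overlayProfile;   // e.g. the "you won" glow profile
    public Vector3 cameraSpot = new Vector3(100f, 0f, 0f);  // made-up offset

    void Start()
    {
        // Park this overlay's camera somewhere only its own volume can reach.
        overlayCamera.transform.position = cameraSpot;

        // Local volume wrapped around that spot.
        var volumeGo = new GameObject("OverlayVolume");
        volumeGo.transform.position = cameraSpot;
        var col = volumeGo.AddComponent<SphereCollider>();
        col.isTrigger = true;
        col.radius = 5f;
        var volume = volumeGo.AddComponent<Volume>();
        volume.isGlobal = false;
        volume.sharedProfile = overlayProfile;

        // The canvas still renders full screen, driven by that camera.
        overlayCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        overlayCanvas.worldCamera = overlayCamera;
    }
}
```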

Hm, the cameras are always at 0,0,0 in every scene. We basically have a template scene that every developer can copy to build new UI scenes. When the scenes are loaded on top of each other, the cameras would touch all the volumes, and it's hard to know where a volume might already exist.

Say one dev decides "ok, I'll put mine at 10,0,0" and another one has the same idea; those would collide and be hard to debug.

I don't quite understand what you mean by "you select the camera in the scene". As far as I know you can't select cameras for PP?

And yes, UI Images for example are just sprites at the end of the day, and PP works :wink:

You can select the camera for a canvas set to Screen Space - Camera. A post-process volume applies the right post processing to the cameras inside its collider. And making your own tools so you don't collide with other volumes is easy, an editor script or something like that (see the sketch below); since it's all about colliders, it's just a workflow to keep to.
Also, UI =/= SpriteRenderer, it's absolutely not the same thing: one is UI, the other is a 2D graphic object, so be clear about what you mean by "sprite"?

And it's absolutely not hard to debug, nor to maintain, if you build the right tools.
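For example, even a tiny editor helper like this already answers "where does a volume already exist?" (just a sketch):

```csharp
using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: list every Volume currently loaded, with its scene and position,
// so nobody places a new one on top of an existing one by accident.
public static class VolumeAudit
{
    [MenuItem("Tools/List Loaded Volumes")]
    static void ListLoadedVolumes()
    {
        foreach (var volume in Object.FindObjectsOfType<Volume>())
        {
            Debug.Log($"{volume.gameObject.scene.name} / {volume.name} " +
                      $"at {volume.transform.position} (global: {volume.isGlobal})",
                      volume);
        }
    }
}
```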

yep, did that.

sorry?

Yes, creating tools is not a problem, but it sounds like you have a solution for keeping global volumes, which I think is not possible? In order to work per camera, the settings need to be local and controlled via colliders, don't they?

We could maybe create collision layers via script depending on the scene name, but I'm not sure whether that would be a good approach.
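If we went the layer route, I imagine something like this (a rough sketch, assuming URP and that a layer matching each scene name already exists in the project's layer settings):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using UnityEngine.SceneManagement;

// Sketch: when an additive scene loads, move its volumes and camera onto a
// layer named after the scene, so each camera only sees its own volumes.
public class PerSceneVolumeLayers : MonoBehaviour
{
    void OnEnable()  => SceneManager.sceneLoaded += OnSceneLoaded;
    void OnDisable() => SceneManager.sceneLoaded -= OnSceneLoaded;

    static void OnSceneLoaded(Scene scene, LoadSceneMode mode)
    {
        int layer = LayerMask.NameToLayer(scene.name);  // layer must already exist
        if (layer < 0) return;

        foreach (var root in scene.GetRootGameObjects())
        {
            foreach (var volume in root.GetComponentsInChildren<Volume>(true))
                volume.gameObject.layer = layer;

            foreach (var cam in root.GetComponentsInChildren<Camera>(true))
                cam.GetUniversalAdditionalCameraData().volumeLayerMask = 1 << layer;
        }
    }
}
```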

I didn't say to keep global volumes, that's impossible. So yes, it's local settings with colliders. How you handle the locations of the different cameras depends on your project. You could use Unity - Scripting API: Physics.ComputePenetration for that, but I think an editor script that references and places the scene objects could be more efficient.
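The ComputePenetration check itself would be roughly this (a sketch; how you collect the colliders to compare is up to you):

```csharp
using UnityEngine;

// Sketch: test whether a newly placed volume collider overlaps an existing one.
public static class VolumeOverlapCheck
{
    public static bool Overlaps(Collider newVolume, Collider existingVolume)
    {
        return Physics.ComputePenetration(
            newVolume, newVolume.transform.position, newVolume.transform.rotation,
            existingVolume, existingVolume.transform.position, existingVolume.transform.rotation,
            out _, out _);
    }
}
```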
Good luck with the implementation!
