Sorry this isn’t a specific Unity question. I was wondering when and why I would use multiple cameras with different layers in a scene. I've noticed that in some games the developers use a separate layer for the HUD, and in 2D games they assign background images to different layers to create a parallaxing background. Is there a general rule for when I'd want to put objects on a separate layer? Also, are there any major performance hits (on mobile or PC) from using several cameras assigned to different layers?
Thanks!
Personally, I’ve had to do this a few times…
First was a space scene where I wanted the near clip plane for the spacecraft to be quite small, but the far clip plane (to encompass the planet) had to be huge. With a single camera the depth buffer doesn't have enough precision to span that range, which leads to z-fighting (surfaces flickering as they fight over the same depth values). Using one camera for the planet and a second for the ship avoided the issue. In this case, both cameras sat at the same position and orientation.
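For what it's worth, here's roughly how I'd wire that up in code. The layer names ("Planet", "Ship") and the clip distances are made up for the example; you'd normally set most of this in the Inspector anyway:

```csharp
using UnityEngine;

// Minimal sketch of the two-camera "depth split" setup.
public class DepthSplitCameras : MonoBehaviour
{
    public Camera farCamera;  // renders the distant planet
    public Camera nearCamera; // renders the nearby ship

    void Start()
    {
        // Far camera draws first and clears the whole frame.
        farCamera.depth = 0;
        farCamera.clearFlags = CameraClearFlags.Skybox;
        farCamera.cullingMask = LayerMask.GetMask("Planet");
        farCamera.nearClipPlane = 1000f;
        farCamera.farClipPlane = 1e7f;

        // Near camera draws on top, clearing only the depth buffer
        // so the planet's colour survives underneath it.
        nearCamera.depth = 1;
        nearCamera.clearFlags = CameraClearFlags.Depth;
        nearCamera.cullingMask = LayerMask.GetMask("Ship");
        nearCamera.nearClipPlane = 0.1f;
        nearCamera.farClipPlane = 1000f;
    }
}
```

The key trick is the second camera's `CameraClearFlags.Depth`: each camera only has to cover its own (much narrower) depth range, so precision is fine in both.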
Next example was generating a top-down mini-map. By putting the map icons on a layer that is only visible to the minimap camera, I was able to add icons and markers that didn't interfere with the player's view.
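A rough sketch of that setup is below. The "MapIcons" layer name is an assumption, and I'm rendering into a RenderTexture that a HUD element would display; attach this to the minimap camera itself:

```csharp
using UnityEngine;

// Sketch: an orthographic camera that follows the player from above
// and is the only camera allowed to see the "MapIcons" layer.
[RequireComponent(typeof(Camera))]
public class MinimapCamera : MonoBehaviour
{
    public Transform player;
    public RenderTexture minimapRT; // shown by a RawImage on the HUD
    public float height = 50f;

    Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.targetTexture = minimapRT;

        // Minimap camera sees the icons; the main camera never does.
        cam.cullingMask |= LayerMask.GetMask("MapIcons");
        Camera.main.cullingMask &= ~LayerMask.GetMask("MapIcons");
    }

    void LateUpdate()
    {
        // Hover above the player, looking straight down, north up.
        transform.position = player.position + Vector3.up * height;
        transform.rotation = Quaternion.Euler(90f, 0f, 0f);
    }
}
```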
Finally, I used a similar technique to emulate Valve's "3D Skybox" (most visible in Counter-Strike when you're dead and noclipping; you'll notice that, e.g., on de_dust the leaves of the palm trees always render behind other objects, no matter the camera position). This works because there's a smaller (usually 1/16th-scale) model of the distant environment somewhere else in the scene. As the player moves around the map, a second camera moves at 1/16th of the speed around the "skymap". This lets middle-distance objects shift correctly with the player's perspective while avoiding depth buffer issues.
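Here's a sketch of how I'd drive that second camera. The "Skybox3D" layer name and the `skyboxOrigin` transform (the point in the miniature that corresponds to the main scene's origin) are assumptions:

```csharp
using UnityEngine;

// Sketch of a Source-style "3D skybox" camera: it tracks the main
// camera's movement at 1/16th speed around a 1/16th-scale miniature,
// and draws before the main camera, which then clears only depth.
[RequireComponent(typeof(Camera))]
public class SkyboxCamera : MonoBehaviour
{
    public Camera mainCam;
    public Transform skyboxOrigin; // origin point inside the miniature
    public float scale = 1f / 16f;

    Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
        cam.depth = mainCam.depth - 1;          // render first
        cam.clearFlags = CameraClearFlags.Skybox;
        mainCam.clearFlags = CameraClearFlags.Depth; // keep our colours

        // Only this camera sees the miniature environment.
        cam.cullingMask = LayerMask.GetMask("Skybox3D");
        mainCam.cullingMask &= ~LayerMask.GetMask("Skybox3D");
    }

    void LateUpdate()
    {
        // Translate at 1/16th of the main camera's speed;
        // rotate in lockstep so the horizon lines up.
        transform.position = skyboxOrigin.position
                           + mainCam.transform.position * scale;
        transform.rotation = mainCam.transform.rotation;
    }
}
```

Because rotation is copied 1:1 while translation is scaled down, the miniature reads as genuinely far away and parallaxes convincingly, without the main camera ever needing a huge far clip plane.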