I’m learning Unity more thoroughly and I’m stuck on a problem that I don’t know how to resolve:
I need to render different layers in a custom order, so the camera will draw them separately and they will not interfere with each other. I’ve seen some videos on how to do it for “first person shooter gun clipping”, but most of them are outdated (for example, the Camera doesn’t have the Background Type - Clear option anymore for rendering layers with separate cameras), and I’m almost sure there is a way to do it with the URP. Example from my current project:
Let’s say there’s a UI canvas with an image in front of the camera, then a cube and a sphere right behind it. I need them to stay this way in the scene, so I can’t rearrange them and I can’t move the camera.
They all have different Layers (for this example: Image - UI layer, Cube - Foreground layer, Sphere - Background layer). Is there a way to render them in this order (image below) using URP or something?
P.S. This is my first post here, so I’m sorry if I did something wrong.
P.P.S. English is my second language, so once again sorry if the grammar is bad or it’s hard to make sense of my sentences.
There are a number of techniques for doing this, most of which the URP makes difficult or impossible. The main issue is that when rendering opaque geometry, the depth buffer on the GPU enforces strict sorting based on the actual geometry depth, regardless of the order the objects are rendered in.
With custom shaders, it’s possible to offset objects in 3D space without affecting their on-screen appearance. This is how many of the first person weapon setups work. But Shader Graph doesn’t support doing this, so it’s not easy to do in the URP.
Another option is to clear the depth buffer between groups of objects. This isn’t really an option to do with Layers, since Unity doesn’t use those for sorting internally. It also requires custom shaders and material queue handling, which the URP makes more difficult to use.
The easiest approach is to use camera stacking, even in non-URP situations. It just wasn’t supported in the URP until very recently.
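If it helps, here’s a minimal sketch of setting up camera stacking in URP from a script (stacking came back around URP 7.2, if I remember right). The component and field names are just for illustration; the same setup can be done entirely in the inspector by marking one camera as Overlay and adding it to the Base camera’s Stack list, with each camera’s culling mask set to the layer it should draw.

```csharp
// Sketch: stacking an overlay camera on a base camera in URP.
// "baseCamera" and "overlayCamera" are assumed to be assigned in the inspector.
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class StackOverlayCamera : MonoBehaviour
{
    public Camera baseCamera;     // Render Type = Base, culls e.g. the Background layer
    public Camera overlayCamera;  // culls e.g. the Foreground layer

    void Start()
    {
        // Mark the second camera as an Overlay camera...
        var overlayData = overlayCamera.GetUniversalAdditionalCameraData();
        overlayData.renderType = CameraRenderType.Overlay;

        // ...and append it to the base camera's stack. Cameras in the stack
        // render on top of the base camera's result, in list order, which is
        // what gives you the custom layer ordering.
        var baseData = baseCamera.GetUniversalAdditionalCameraData();
        if (!baseData.cameraStack.Contains(overlayCamera))
            baseData.cameraStack.Add(overlayCamera);
    }
}
```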
And I already mentioned the two other methods of doing it above. The first is offsetting the objects’ positions, the second is clearing the depth buffer.
The first would be accomplished by taking the clip space position (the position space the vertex shader outputs) and modifying the z to flatten or offset objects towards or away from the camera. This means the object’s actual screen depth is being modified, but not its appearance otherwise. You can do this manually just by moving objects further from the camera and scaling them up, though the shader version is a little easier to work with. A rough sketch of the manual version is below.
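Here’s what the manual “push it further away and scale it up” version could look like (just a sketch; the component name, the `factor` field, and doing this in LateUpdate are my own choices for illustration). Under a perspective camera the on-screen size stays the same, but the depth the object actually renders at gets pushed back. Note that lighting and shadows will change, since the object really is bigger and further away.

```csharp
using UnityEngine;

public class PushAwayFromCamera : MonoBehaviour
{
    public Camera cam;
    [Min(1f)] public float factor = 10f;  // how much further from the camera to push the object

    Vector3 originalPosition;
    Vector3 originalScale;

    void Start()
    {
        originalPosition = transform.position;
        originalScale = transform.localScale;
    }

    void LateUpdate()
    {
        // Push the object along the camera-to-object ray and scale it up by the
        // same factor. Under a perspective projection its apparent size on screen
        // is unchanged, but its real depth (and so what it sorts behind) changes.
        Vector3 fromCam = originalPosition - cam.transform.position;
        transform.position = cam.transform.position + fromCam * factor;
        transform.localScale = originalScale * factor;
    }
}
```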
The second could be done with special shaders that, like the first method, modify their clip space z so they’re at the far plane and always write to the depth buffer even if other stuff is closer. This means using material queues rather than layers to keep things rendering in the correct order. Or you could use CommandBuffer.ClearRenderTarget(), if you also render all your objects manually with command buffers so you control the draw order yourself.
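A very rough sketch of that command buffer route, written against the built-in pipeline’s Camera.AddCommandBuffer (under the URP the same calls would need to live inside a ScriptableRenderPass instead, which is part of why I said the URP makes this harder). The field names and the event used are assumptions for illustration, and objects drawn this way would need to be excluded from the camera’s normal culling mask so they aren’t rendered twice:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class DepthClearedGroup : MonoBehaviour
{
    public Camera cam;
    public Renderer[] foregroundRenderers;  // objects to draw on top of everything
    public Material foregroundMaterial;

    void OnEnable()
    {
        var cmd = new CommandBuffer { name = "Clear depth, then draw foreground" };

        // Wipe the depth buffer so the foreground group can't intersect with
        // anything drawn before it...
        cmd.ClearRenderTarget(true, false, Color.clear);

        // ...then draw the group yourself, in whatever order you want.
        foreach (var r in foregroundRenderers)
            cmd.DrawRenderer(r, foregroundMaterial);

        cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }
}
```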