HDRP Camera Stacking Now Supported?

Camera Stacking was supposedly removed from HDRP, but according to the HDRP 6.9 docs:

https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@6.9/manual/HDRP-Camera.html

Quote from that page:
“Cameras capture and display your world to the user. Customize and manipulate your Cameras to present your Unity Project however you like. You can use an unlimited number of Cameras in a Scene and set them to render in any order, at any position on the screen.”

Edit (for clarity): I tested this in 2019.2 with HDRP 6.9.1. Camera stacking works with multiple cameras, but it has a high performance cost of roughly 1-2 ms per camera on both the CPU and GPU.

Post-processing seems to apply to the other cameras only when the highest-depth camera has it enabled. This means we currently cannot have a UI in World Space or Screen Space - Camera that is unaffected by post-processing.

Is Camera Stacking now a supported part of HDRP? Will it stay or be removed?


There’s been a lot of confusion about camera stacking in HDRP. Can we get an official word on this @Remy_Unity @SebLagarde ?

Especially being able to choose which cameras get affected by post processing.


LWRP as well.

I am also interested in this.

If it’s not supported in HDRP, can someone suggest another way to specify which objects are/aren’t affected by a post-processing volume?

Hi,

The documentation is correct. HDRP supports many cameras (which you can use to render to texture), but it does not say that it supports camera stacking.

So, the official answer:
HDRP supports multi-camera (i.e. split screen).
HDRP does not support camera stacking.

We have, however, patched HDRP in 7.2.0 so it can support camera stacking within a set of constraints (i.e. we correctly manage the clearing of depth / color). We are working on a prototype tool to compose multiple cameras or stack them. There is no ETA for this tool, but it means some users could come up with custom scripts for it.
A big warning: the cost of camera stacking is very heavy (on the CPU), and it is not recommended in a game context. Prefer custom passes / custom post-processes.
Also, in HDRP, if you want to draw UI after post-processing, there is a Render Pass mode on the Unlit shader and Shader Graph that does exactly that; it is named After Postprocess (without the need for a second camera).

Hope that helps.
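
For those who want to try the custom pass route mentioned above, here is a minimal sketch (not an official recipe) of an HDRP custom pass that re-draws objects on a dedicated UI layer at the After Post Process injection point, so they escape tonemapping and DOF without a second camera. It assumes a user-created “WorldUI” layer and the newer custom pass API (`Execute(CustomPassContext)`; earlier HDRP versions used a different `Execute` signature):

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Sketch: add a CustomPassVolume component to a GameObject, set its
// Injection Point to "After Post Process", and add this pass to its list.
// Objects on the chosen layer are then drawn after post-processing runs.
class WorldUIPass : CustomPass
{
    public LayerMask uiLayer; // assign your "WorldUI" layer in the Inspector

    protected override void Execute(CustomPassContext ctx)
    {
        // Re-renders every renderer matching the layer mask with its own material.
        CustomPassUtils.DrawRenderers(ctx, uiLayer);
    }
}
```

Exclude the WorldUI layer from the main camera’s culling mask so those objects are drawn only by this pass.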


@SebLagarde

Hi. When you say it’s very heavy on the CPU, do you mean that rendering UI elements on an overlay canvas or with a custom pass / custom post-FX is significantly more efficient than rendering that UI on top of the scene with a separate camera that renders only the UI?

I’m asking because in my project’s setup I render the UI using a UI-only camera, so that the UI lives in world space and shows up in VR. What is the correct single-camera alternative when you want your UI to show up in VR as well?


I have exactly the same issue, with the addition that the interactive World Space UIs are rendered to texture and applied to the car screens. There are up to 8 interactive screens in the car.
They don’t need TAA, because of smearing. This works with the AfterPostprocess mode described above.
But they do need tonemapping and DOF, because when the car interior has DOF, the screens need it too.

For us this is the last showstopper to finally switch to HDRP.

@keeponshading I just found out how powerful the custom pass / full-screen pass features are. I recommend taking a look at the following link: it’s a Unity project that exercises these features and manages to get some pretty amazing full-screen and per-object effects. You’ll need at least Unity 2020.1.0a24 - I have 2020.1.0a25 and it works fine.

https://github.com/alelievr/HDRP-Custom-Passes

Now, speaking strictly about your car screens: you can render each screen separately with a different camera into a render texture (make sure that the RT has alpha) and plug these RTs into a Lit material as the Base Map and / or Emissive Color. In the material, set Surface Type to Transparent, Rendering Pass to Default, and Blending Mode to Alpha; check Transparent Depth Prepass, Transparent Depth Postpass, Transparent Writes Motion Vectors and Depth Write; and set Depth Test to LessEqual. Also check Receive SSR if the Material Type is set to something that looks glossy (Standard, Iridescence, Translucent, etc.).

Additionally, crank up the Smoothness and / or Coat Mask to make the surface glossy. Since we’re talking about glass, you can enable refraction by setting the Refraction Model to Box or Thin. Of course, put that material (or materials) on quads, so they show up in the scene and render as geometry.

[Image: HDRP Lit material transparency settings]
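
A hedged C# sketch of the render-to-texture wiring described above, assuming one dedicated camera per screen and the HDRP/Lit texture property names `_BaseColorMap` and `_EmissiveColorMap` (the RT resolution is an arbitrary example):

```csharp
using UnityEngine;

// Sketch: render one car screen's UI with a dedicated camera into a
// RenderTexture that keeps an alpha channel, then feed that texture into
// the transparent HDRP/Lit material on the screen quad.
public class ScreenToTexture : MonoBehaviour
{
    public Camera screenCamera;     // culls only this screen's UI layer
    public Material screenMaterial; // the transparent Lit material on the quad

    RenderTexture rt;

    void Start()
    {
        // ARGB32 preserves alpha, which the transparent material relies on.
        rt = new RenderTexture(1024, 512, 24, RenderTextureFormat.ARGB32);
        screenCamera.targetTexture = rt;

        screenMaterial.SetTexture("_BaseColorMap", rt);
        screenMaterial.SetTexture("_EmissiveColorMap", rt);
    }

    void OnDestroy()
    {
        if (rt != null) rt.Release();
    }
}
```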

By doing this, you’ll effectively get a glossy, refractive surface that, although it looks transparent, writes into the depth buffer, and is hence subject to all post-processing effects that use the depth buffer, Depth of Field being one of them. Also, because you’re using the Lit shader, it will respond well to the lighting in your scene (reflections included).

Check the two uploaded photos for a reference of what you could get. The main difference between them is that in the second one I’ve used alpha clipping.

Why use alpha clipping? Since your transparent surface now writes into the depth buffer, everything behind it will be ignored by the DOF (you can see that in the first capture: in the focus areas, the background is also in focus, which is wrong).

[Image: DOF over the transparent screen without alpha clipping - the background behind the surface is also in focus]

In the second capture, alpha clipping is enabled, so the shader writes into the depth buffer only where the alpha is greater than some threshold (0.22 in my case). Doing so, you’ll get correct DOF on your UI and also on whatever is in the background.

[Image: DOF with alpha clipping enabled - correct DOF on the UI and the background]
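
If you prefer to toggle this from script instead of the material Inspector, here is a small sketch, assuming the HDRP/Lit property names `_AlphaCutoffEnable` / `_AlphaCutoff` and the `_ALPHATEST_ON` keyword:

```csharp
using UnityEngine;

// Sketch: enable alpha clipping on an HDRP/Lit material at runtime, using
// the 0.22 threshold from the capture above. Only fragments whose alpha is
// above the threshold then write depth, which is what fixes the DOF.
public static class AlphaClipSetup
{
    public static void Enable(Material m, float cutoff = 0.22f)
    {
        m.SetFloat("_AlphaCutoffEnable", 1f);
        m.SetFloat("_AlphaCutoff", cutoff);
        m.EnableKeyword("_ALPHATEST_ON");
    }
}
```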

All the best.


Wow. Thanks a lot for the detailed explanation.
I will try it tomorrow.

FYI: in the Built-in RP, an extended AnyUI is used.

You’re welcome. I’m already fully on HDRP with my current project, but perhaps for future projects I’ll consider that component. It does look really useful, thanks.

FYI, camera stacking is a bad concept anyway, and not one that most commercial games use. They favour drawing to buffers, which they can then draw with another camera or composite, things like that.

When you have multiple stacked cameras in Unity, Unity does all the work over again per camera (and always has), from sorting lights to culling; it’s a huge amount of wasted render and CPU time (and always has been).

It’s better to ask, from this starting point, how to achieve your ambitions without an extra camera (you will find all things are possible, just different, and much faster to execute).


Yea, there’s actually an official replacement for this coming, called the Compositor. It’s not camera stacking, but it will do the same things and more, efficiently. There are currently three ways to do things with it, and each has its pros and cons when it comes to performance. You can find all the info here.

https://github.com/Unity-Technologies/ScriptableRenderPipeline/blob/HDRP/compositor/com.unity.render-pipelines.high-definition/Documentation~/Compositor-Main.md


I haven’t checked, but I’m hoping that URP and HDRP gain parity on that sort of thing so I can port easily if need be.


Is there an alternative way to render FPS objects with a different FOV than the camera’s, without camera stacking?

URP can achieve this with a custom pass, since there is an override for that.

How about HDRP?


Camera stacking is available in URP, https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@7.2/manual/camera-stacking.html

If you’re referring to making FPS weapons look like they are rendered with a different field of view, then a shader with a custom projection matrix distorting the vertices can accomplish the exact same effect without the need for an additional camera.
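
A minimal sketch of that idea: compute a second projection matrix with the weapon FOV on the CPU and expose it to shaders as a global, then have the weapon shader’s vertex stage use it instead of the camera’s projection. The global name `_WeaponProjMatrix` is an arbitrary example, not a built-in:

```csharp
using UnityEngine;

// Sketch: push a narrower-FOV projection matrix to shaders every frame.
// A custom weapon shader can multiply its view-space positions by this
// matrix instead of the camera's projection, so the weapon renders as if
// at 45 degrees while the main camera keeps its own FOV.
public class WeaponFov : MonoBehaviour
{
    public Camera mainCamera;
    public float weaponFov = 45f;

    void LateUpdate()
    {
        Matrix4x4 proj = Matrix4x4.Perspective(
            weaponFov, mainCamera.aspect,
            mainCamera.nearClipPlane, mainCamera.farClipPlane);

        // GL.GetGPUProjectionMatrix accounts for platform depth conventions.
        Shader.SetGlobalMatrix("_WeaponProjMatrix",
            GL.GetGPUProjectionMatrix(proj, false));
    }
}
```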


I’ll give that a try.
It sounds very hacky, in my opinion.

Is it possible to just override the default projection matrix, or should I rewrite the Lit shader?

Game dev is very hacky :smile: You can also try writing to an area of depth to reserve it for the FPS weapon, if that’s possible.

Multiple cameras are a game-dev evil if they are all doing the same work over again.


I agree.
But in URP there is the possibility to override the camera for custom passes.
It looks like it just overrides the projection matrix for rendering objects in this case,
with no extra work for the pipeline. Just integrated!

Such a solution, however, is completely missing in HDRP.
I guess this is another half-baked thing that comes with HDRP. :smile:

Is there a current guide/workflow for a replacement technique to camera stacking in HDRP?
