Does Unity optimize an XR scene internally?

I have created two versions of an XR scene in Unity for the HTC Vive Pro: one with multi-pass rendering and one with single-pass rendering. The multi-pass scene renders the scene twice into different render textures, uses them as filters, and then displays the post-processed version of the scene to the user. The single-pass scene, by contrast, outputs directly to the display device. When I compare the GPU render times of the two scenes using FCAT VR, both perform the same, even as I increase the complexity of the scene by adding more and more objects. I added almost 50 million vertices to the scene, and both versions still perform the same.
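In case it helps, here is a minimal sketch of what my multi-pass version roughly does (script and property names here are placeholders, and the filter material stands in for my actual post-processing shader):

```csharp
using UnityEngine;

// Hypothetical sketch of the multi-pass setup: render the scene twice into
// off-screen render textures, then combine them through a filter material.
// Attach to the main (displaying) camera; "sceneCamera" is a second,
// disabled camera that sees the same scene.
public class MultiPassFilter : MonoBehaviour
{
    public Camera sceneCamera;      // renders the scene off-screen
    public Material filterMaterial; // shader that uses both passes as filters
    RenderTexture passA, passB;

    void Start()
    {
        passA = new RenderTexture(Screen.width, Screen.height, 24);
        passB = new RenderTexture(Screen.width, Screen.height, 24);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // First pass: render the scene into passA.
        sceneCamera.targetTexture = passA;
        sceneCamera.Render();

        // Second pass: render the scene again into passB.
        sceneCamera.targetTexture = passB;
        sceneCamera.Render();
        sceneCamera.targetTexture = null;

        // Combine both passes through the filter and output to the display.
        filterMaterial.SetTexture("_PassA", passA);
        filterMaterial.SetTexture("_PassB", passB);
        Graphics.Blit(src, dst, filterMaterial);
    }
}
```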

Does Unity apply some optimization internally that makes both of them perform the same? I was expecting the multi-pass scene to perform worse than the single-pass scene.


Using single-pass methods will improve CPU performance by essentially cutting all draw calls in half.
But if your scene is purely GPU bound, performance will be very close to multi-pass.
If you have shadows, though, they will be rendered twice with multi-pass under the current code, so you should see better perf with single-pass + shadows.

Thanks for your response. Could you please elaborate on what you mean by “But if your scene is purely GPU bound, performance will be very close to multi-pass”?
Just to be clear, I am not talking about the Stereo Rendering Modes when I say multi-pass and single-pass rendering. What I mean is using a multi-pass shader to render the scene multiple times versus a scene that isn’t post-processed and is displayed directly.
I do have custom vertex and fragment shaders on the shadow-casting and shadow-receiving object materials. I have also tried adding multiple spot lights to the scene and calculating the final color in my fragment shader.
Still, both versions of the scene take similar GPU render time.