HDRP + VR + Asymmetric + Graphics Compositor

Project is HDRP, 2020.3.5f1, using a VRIF rig, and deploying to Quest 2.

There is a VR player and an instructor. The VR player interacts with multiple pieces of equipment (binoculars, range finders, scopes, etc.) that currently render to textures from their own cameras. The instructor views what the VR player is doing in third person and also has views from some extra cameras rendering to texture. Obviously, this runs like poop because of all the HDRP cameras.

I’m currently trying to work out whether the Graphics Compositor (for which tutorials seem scarce) can accomplish the camera stacking we are performing, asymmetrically, with dynamic cameras (since the equipment is on, off, or absent from the scene at different times).

The idea is primarily to reduce the load of having up to 5-10+ HDRP cameras eating up resources at once. Is this the way to go, or should we be wholeheartedly redesigning how the equipment displays things from afar?
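Compositor or not, one mitigation for the dynamic-camera load is to keep each piece of equipment's camera disabled unless the player is actually using it. A minimal sketch, assuming a hypothetical `EquipmentView` component you'd wire into your VRIF grab/aim events:

```csharp
using UnityEngine;

// Hypothetical helper: stops an equipment camera (binoculars, scope, etc.)
// from rendering into its RenderTexture unless the device is in use.
public class EquipmentView : MonoBehaviour
{
    [SerializeField] Camera equipmentCamera;   // renders to a RenderTexture
    [SerializeField] Renderer displaySurface;  // lens/screen showing that texture

    // Call from your grab/aim logic (e.g. VRIF grab events).
    public void SetInUse(bool inUse)
    {
        // Disabling the Camera component skips its entire HDRP render loop,
        // which is far cheaper than leaving it rendering off-screen.
        equipmentCamera.enabled = inUse;
        displaySurface.enabled = inUse;
    }
}
```

You could also drop the equipment RenderTexture resolution, or only re-render those cameras every other frame, if disabling them outright is too abrupt.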

So far, I’ve managed to get the compositor in there, but the displays disagree badly. If I have my proper instructor view on display 1, the VR player view on display 2 won’t render at all, although it continues to track. If I reverse them, the instructor view is just the VR view, which isn’t helpful.

Kill me. All I had to do was clear the color from my instructor camera.
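For anyone hitting the same thing: in HDRP the clear behaviour lives on `HDAdditionalCameraData`, not on the base `Camera`'s clear flags. A sketch of the fix (the component and property names are standard HDRP API; the script name is made up):

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Hypothetical setup script for the instructor camera.
public class InstructorCameraSetup : MonoBehaviour
{
    void Start()
    {
        // HDRP ignores Camera.clearFlags; clearing is configured on the
        // HDAdditionalCameraData component attached alongside the Camera.
        var hdData = GetComponent<HDAdditionalCameraData>();
        hdData.clearColorMode = HDAdditionalCameraData.ClearColorMode.Color;
        hdData.backgroundColorHDR = Color.black;
    }
}
```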

I see… I hope you don’t mean a native Quest 2 app with HDRP; that doesn’t sound like it would run at all.

No, it’ll be running via Oculus Link.