I’m working on a project where one requirement is to render a single camera view to two displays, with a different GUI on each display. I’m trying to find the most performant way to achieve this, and any insight would be great; I’ve made some progress, but not as much as I wanted. Any performance I can gain goes straight toward improving the visuals.
The easiest solution I’ve tried was to duplicate the camera and direct each copy to the corresponding display. This works, but it absolutely destroys the framerate since the whole scene is rendered twice, no surprise.
The other solution I’ve tried is to let a single camera render the scene while copying its render texture (activeTexture) into a RawImage visible on the second camera/display. This worked, but it felt like a bit of a hack and didn’t work properly in the editor (it worked fine in builds, which is where it needs to work).
I feel like there is a lot of room for improvement, but I haven’t figured it out; maybe it’s simple and I’m just being obtuse. Does anyone know a better way to get a single camera view onto multiple displays with a different UI for each display?
It might be easier to have three cameras. Keep the one camera that renders the scene and have it render to a target RenderTexture; you can set this via script in Awake, or assign a RenderTexture asset to the camera’s Target Texture. The other two cameras just display the UI and have a command buffer that blits the main camera’s texture into them during one of the events prior to drawing the UI, maybe CameraEvent.BeforeForwardAlpha if all your UI is transparent.
You can also get a small extra perf win here by setting the UI cameras’ Clear Flags to Don’t Clear, since blitting the main camera’s render texture effectively clears them anyway. If your UI uses the depth buffer (i.e. it has opaque objects), maybe use Depth Only.
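For the first part, assigning the render target from script is a one-liner. A minimal sketch, with mainCamera and sceneTexture as placeholder names:

using UnityEngine;

public class SceneTarget : MonoBehaviour
{
    public Camera mainCamera;          // the camera that renders the scene
    public RenderTexture sceneTexture; // shared target the UI cameras blit from

    void Awake()
    {
        // Send the scene camera's output into the texture instead of a display.
        mainCamera.targetTexture = sceneTexture;
    }
}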
Thank you for your help; I had much better results following your advice.
My scene had changed since I last took performance numbers, so these aren’t comparable to what I wrote in the OP, but this is what I saw testing it out this afternoon.
I’m pretty happy with the result, and it works just fine in the editor. This is the first time I’ve tried using command buffers and related APIs, so if you don’t mind, let me know if I made any glaring mistakes in my proof-of-concept code. For reasons I don’t understand, I didn’t have success with CameraEvent.BeforeForwardAlpha, but CameraEvent.AfterEverything did the job.
public Camera camera1;
public Camera camera2;

private CommandBuffer buffer1;
private CommandBuffer buffer2;

void Start()
{
    buffer1 = new CommandBuffer();
    buffer1.Blit(mainTexture, camera1.activeTexture);
    camera1.AddCommandBuffer(CameraEvent.AfterEverything, buffer1);

    buffer2 = new CommandBuffer();
    buffer2.Blit(mainTexture, camera1.activeTexture);
    camera2.AddCommandBuffer(CameraEvent.AfterEverything, buffer2);

    if (Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.OSXPlayer)
    {
        if (Display.displays.Length > 1)
            Display.displays[1].Activate();
        else
            Screen.fullScreen = true;
    }
}
You’re using camera1.activeTexture for both Blit() calls; you should be using BuiltinRenderTextureType.CurrentActive or CurrentTarget. I also don’t know when mainTexture is being rendered to, but presumably it’s the main camera’s target, and that camera has the lowest Depth value to ensure it renders first.
One minor thing that won’t have any impact on performance: you can use the same command buffer on both cameras, especially once you make the change noted above (though it might already work as-is, since you’re assigning the same target to both and it still runs).
Command buffers are just lists of commands, and in this case the command is general enough that you don’t need a unique one for each camera: just “render a texture into the currently active render target when this command runs”.
I forgot to include the declaration of mainTexture in the snippet, my bad. It’s just a RenderTexture that the main camera renders to. I noticed my mistake with using camera1.activeTexture in both calls last night, whoops. It worked anyway, but using BuiltinRenderTextureType.CurrentActive makes way more sense, thanks.
Again, big thanks for your help. The solution was very simple but wasn’t obvious to me at all. I’ve mostly found Unity’s documentation very easy to understand, but I was lost on this subject. I’ll paste the corrected code below in case it helps someone in the future.
using UnityEngine;
using UnityEngine.Rendering;

public class CameraCopier : MonoBehaviour
{
    public RenderTexture mainTexture; // target of the main (scene) camera
    public Camera camera1;
    public Camera camera2;

    void Start()
    {
        // One shared command buffer: copy the scene texture into whichever
        // render target is active when each UI camera finishes rendering.
        var buffer = new CommandBuffer();
        buffer.Blit(mainTexture, BuiltinRenderTextureType.CurrentActive);
        camera1.AddCommandBuffer(CameraEvent.AfterEverything, buffer);
        camera2.AddCommandBuffer(CameraEvent.AfterEverything, buffer);

        // Activate the second display in builds, or fall back to fullscreen
        // if only one display is connected.
        if (Application.platform == RuntimePlatform.WindowsPlayer || Application.platform == RuntimePlatform.OSXPlayer)
        {
            if (Display.displays.Length > 1)
                Display.displays[1].Activate();
            else
                Screen.fullScreen = true;
        }
    }
}
An alternative approach in URP:
1. Configure the RenderTexture with the desired render resolution. (You can use multiple RenderTextures for various resolutions, or a RenderTexture with dynamic resolution, but you need a dedicated script to set this up in either case; see the sketch after these steps.)
2. Create a Shader Graph with your RenderTexture as input: connect it to a Texture 2D Sampler, and the sampler’s RGBA output to the fragment Base Color. Save the shader.
3. Configure your main camera with your RenderTexture as output. Use your normal URP Renderer for the main camera, and set up its Culling Mask to render only what you need on both output displays (e.g. exclude the GUI). The main camera needs a lower Priority (e.g. -1) than the display cameras.
4. Configure your display cameras to use a new second Renderer and a higher Priority (e.g. 0). Set the Culling Mask to show only what you need on each display (e.g. the GUI layer).
5. Set up the second Renderer with only the Blit Render Feature. In the feature, select Before Render as the Event and use the material with your Shader Graph as the Blit Material.
This way you render your scene only once for multiple displays, and you can even use layers / culling masks on the display cameras to add to the basic view.
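For step 1, a minimal sketch of such a setup script, assuming the texture is created at runtime at a chosen resolution (the names, and the _SceneTexture property, are placeholders; use the property reference from your own Shader Graph):

using UnityEngine;

public class RenderTextureSetup : MonoBehaviour
{
    public Camera mainCamera;     // the camera that renders the scene
    public Material blitMaterial; // the material using the Shader Graph
    public int width = 1920;      // desired render resolution
    public int height = 1080;

    private RenderTexture sceneTexture;

    void Awake()
    {
        // Create the texture at the chosen resolution (with a 24-bit depth
        // buffer) and route the main camera into it.
        sceneTexture = new RenderTexture(width, height, 24);
        mainCamera.targetTexture = sceneTexture;

        // Point the blit material at the runtime texture.
        blitMaterial.SetTexture("_SceneTexture", sceneTexture);
    }

    void OnDestroy()
    {
        if (sceneTexture != null)
            sceneTexture.Release();
    }
}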
One thing to keep in mind is that the resolution of the RenderTexture may differ from the display resolution, which has some undesired side effects: if the aspect ratios differ, the image will be stretched on one axis, and if the RenderTexture resolution is too low, the output will be blurry. In any case, Input.mousePosition (e.g. for raycasts) will have to be scaled on X and Y by the factor by which the RenderTexture and display resolutions differ to work properly.
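That adjustment is just two multiplications. A minimal sketch, with placeholder names, remapping the mouse position into RenderTexture pixels before raycasting:

using UnityEngine;

public class ScaledPointer : MonoBehaviour
{
    public Camera mainCamera;          // the camera rendering into the texture
    public RenderTexture sceneTexture;

    // Mouse position in screen pixels, remapped into RenderTexture pixels.
    Vector3 ScaledMousePosition()
    {
        Vector3 pos = Input.mousePosition;
        pos.x *= (float)sceneTexture.width / Screen.width;
        pos.y *= (float)sceneTexture.height / Screen.height;
        return pos;
    }

    void Update()
    {
        // The camera renders into sceneTexture, so ScreenPointToRay expects
        // coordinates in the texture's pixel space, not the display's.
        Ray ray = mainCamera.ScreenPointToRay(ScaledMousePosition());
        if (Physics.Raycast(ray, out RaycastHit hit))
            Debug.Log(hit.collider.name);
    }
}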