I am exploring using either Unity3D or Unreal for my application.
I write in both C# and C++, but I like C# far more. I want to use Unity3D for a true 3D application, but I may have found a limitation that prevents further consideration. I am hoping you can help me understand how cameras, monitors, and windows relate to my application.
This true 3D application shows an actual 3D scene on a five-foot-wide special screen. What you see is not stereo but a real 3D scene; no glasses or other gear needed, just your eyes.
I am currently using SharpDX, a C# wrapper library. In this SharpDX application, I use 24 cameras across four windows, i.e., six cameras per window. Note that I have 4 GPUs, each hosting one window.
Now, from the Unity3D manual under the heading: “Multi-Display”, the manual says:
“You can use multi-display to display up to eight different Camera views of your application on up to eight different monitors at the same time. You can use this for setups such as PC games, arcade game machines, or public display installations.”
Question: is the above 8 cameras per “monitor” or is it 8 cameras overall?
I’m not 100% sure, but I think that’s an 8-monitor limit. The most I’ve done is 4 monitors on a CAVE system. That CAVE system also provided a foundation layer for displaying to an arbitrarily shaped monitor. Displaying that way used 8 cameras, but one of those went to the Control Screen so the technician could make on-the-fly adjustments to the experience. Also, when displaying to multiple monitors, there’s a boolean value you need to set in the settings (in code), but I don’t remember what it is off the top of my head.
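If memory serves, activating the extra monitors goes through Unity’s `Display.displays` API; something like the sketch below (double-check the current Multi-Display manual page, since I may be misremembering the exact setting):

```csharp
using UnityEngine;

public class ActivateDisplays : MonoBehaviour
{
    void Start()
    {
        // Display.displays[0] is the primary display and is always active.
        // Additional connected displays must be explicitly activated once,
        // typically at startup. Activation cannot be undone at runtime.
        for (int i = 1; i < Display.displays.Length; i++)
        {
            Display.displays[i].Activate();
        }
    }
}
```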
You can have multiple cameras display to the same monitor. You can give each camera a sub-section of the monitor. You can have one camera overwrite the output of another camera (either fully or partially). You can also have a camera render to a separate buffer (a RenderTexture) instead of rendering to a monitor.
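To make those options concrete, here’s a rough sketch using Unity’s Camera API (`targetDisplay`, `rect`, `depth`, and `targetTexture`); treat it as a sketch from memory rather than copy-paste-ready code:

```csharp
using UnityEngine;

public class CameraRoutingExample : MonoBehaviour
{
    public Camera cam;
    public RenderTexture offscreen; // e.g. an asset created in the editor

    void Start()
    {
        // Send this camera's output to the second monitor (0-based index).
        cam.targetDisplay = 1;

        // Give the camera the left half of that monitor.
        // The viewport rect uses normalized 0..1 coordinates.
        cam.rect = new Rect(0f, 0f, 0.5f, 1f);

        // Cameras with a higher depth render after (on top of) lower-depth
        // cameras, which is how one camera can overwrite another's output.
        cam.depth = 1;

        // Alternatively, render off-screen into a RenderTexture instead of
        // a monitor (uncomment to use):
        // cam.targetTexture = offscreen;
    }
}
```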
I don’t know if there’s a limit to the total number of cameras you can run, but using multiple cameras in Unity can get really resource-intensive if you are using them in a non-VR setup. VR does things like use the same perspective matrix for both cameras while giving them the same rotation (there’s just a small positional offset). By doing this, it can do the GameObject Culling once and then share the results with both cameras.
Outside of this VR performance trick, Unity does the GameObject Culling for each Camera, and on the Standard Render Pipeline that’s a single-threaded, CPU-bound process. If you’re in a scene dense with GameObjects (like a forest), you are going to need a CPU with extremely good (and fast) single-threaded performance. I am speaking from first-hand experience on this one: the project I was working on hit a CPU bottleneck when we grew the CAVE system from 3 to 4 monitors. That said, there are also things like the Looking Glass Unity Plugin that work well on a much less powerful system, while running the GameObject Culling process many more times. The reason it worked well on the Looking Glass is that their demos had far fewer GameObjects.
The main reason I’m not 100% sure you can go past 8 cameras is that I don’t know whether the Scriptable Render Pipelines, like HDRP and URP, impose any additional restrictions on the maximum number of cameras.
I am going to look into some of the techniques you’ve provided.
It may be more involved than I thought. There is indeed a lot of computation. In SharpDX, I’ve had to load the model multiple times in parallel processes synchronized through shared memory. (I still get 60 fps under the full load.)
I may not come back to this for a few weeks. I will let you know how it goes.
I really appreciate your insights.