I wanted to see if others here have used Unity in professional setups where the number of screens / beamers that had to be fed forced them to use multiple GPUs. This is on Windows with DX9 / DX11.
We are currently in such a situation where this will be required and I’m now attempting to find the right approach to solve the issue.
CrossFire / SLI is obviously not an option in this situation, as it disables the outputs of the additional cards.
There are normally 2 options:
1. I choose which graphics adapter to use at startup and the engine then just uses that. Unity, up to 4.1 at least, is not capable of this; it automatically enforces the primary adapter. (Which makes little sense - Dark Basic and Blitz Basic were smarter than that more than a decade ago.)
2. The engine supports multiple separate DX contexts with the possibility of binding a camera to a context. As far as I can see, not even the new Display class, once it is supported on desktop, will be capable of this; it is only targeted at multiple outputs on the same graphics adapter.
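For what it's worth, here is a minimal sketch of how that Display workflow is supposed to look once it reaches desktop builds (class and field names are my own); note that it only spreads cameras across outputs of the same adapter, which is exactly the limitation I mean:

```csharp
using UnityEngine;

// Minimal sketch of Unity's multi-display workflow via the Display class
// (desktop support only arrived in later versions). This drives additional
// outputs on the SAME graphics adapter - it does not let you choose which
// physical GPU renders, which is the actual problem here.
public class MultiDisplaySetup : MonoBehaviour
{
    // Assumed to be assigned in the inspector, one camera per output.
    public Camera[] outputCameras;

    void Start()
    {
        // Display.displays[0] is the primary display and is always active.
        for (int i = 1; i < Display.displays.Length; i++)
            Display.displays[i].Activate();

        // Bind each camera to a display index; anything beyond the number
        // of connected outputs falls back to the primary display.
        for (int i = 0; i < outputCameras.Length; i++)
            outputCameras[i].targetDisplay = Mathf.Min(i, Display.displays.Length - 1);
    }
}
```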
Has anyone solved this problem in the past using Unity 4+?
I get the feeling that I am forced into the brute-force fallback, which is to throw in 2 or 3 distinct workstations that handle the screens and keep them correctly synchronized with all the real-time user-input terminals that will drive the interactive experience. That would obviously not be optimal and would be somewhat a shame, as graphics adapter selection has existed in DirectX for ages.
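If it does come to that, the synchronization layer itself does not have to be elaborate. Something along these lines - purely a hypothetical sketch, with a placeholder port and packet layout - is roughly what I would expect to write: a master node broadcasting the shared camera pose over UDP, and each render workstation applying it every frame (projector-level frame lock / genlock is a separate problem):

```csharp
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Hypothetical sketch of the brute-force fallback: one master machine
// broadcasts the shared camera pose over UDP and every render workstation
// applies it each frame. Port and packet layout are placeholders.
public class PoseSync : MonoBehaviour
{
    public bool isMaster;           // true on the machine that owns the user input
    public Transform trackedCamera; // camera rig shared by all nodes
    const int Port = 19500;         // placeholder port

    UdpClient udp;

    void Start()
    {
        udp = isMaster
            ? new UdpClient { EnableBroadcast = true }
            : new UdpClient(Port);
    }

    void Update()
    {
        if (isMaster)
        {
            Vector3 p = trackedCamera.position;
            Quaternion r = trackedCamera.rotation;
            float[] pose = { p.x, p.y, p.z, r.x, r.y, r.z, r.w };
            byte[] data = new byte[28];
            System.Buffer.BlockCopy(pose, 0, data, 0, 28);
            udp.Send(data, data.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
        else if (udp.Available > 0)
        {
            IPEndPoint from = null;
            byte[] data = udp.Receive(ref from);
            float[] pose = new float[7];
            System.Buffer.BlockCopy(data, 0, pose, 0, 28);
            trackedCamera.position = new Vector3(pose[0], pose[1], pose[2]);
            trackedCamera.rotation = new Quaternion(pose[3], pose[4], pose[5], pose[6]);
        }
    }

    void OnDestroy()
    {
        udp.Close();
    }
}
```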
Note: what we are currently waiting to have answered is whether ImmersaView Warp, which we will need to use anyway, is able to combine distinct GPUs into a single virtual device so Unity can use them.
I’m currently working on the same kind of project as you are and I’m really interested in your feedback:
Did you manage to use Unity with a multi-GPU setup (with or without SLI)? Do you have a single Unity app running on your workstation, or one app per GPU? Did you find out how to tell Unity which GPU it should use?
Are you actually using ImmersaView? If yes, does it work with your multi-GPU workstation?
I am currently carrying out R&D into how appropriate Unity is for use in an in-house simulator tool.
This tool will ideally be driven by Unity C#, feeding 9 projectors. Rendering power isn't an issue, as we already have the physical setup and hardware.
At the moment we use a different game engine per simulator. I am hoping to bring these into line by using Unity across the board.
As for the issues I can see arising, I need to implement Mersive SOL as well as GPU affinity, since we render twice to each projector for stereoscopic 3D. Ideally Mersive would handle the warping and blending, though we could use NVIDIA's solution instead.
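For context, the render-twice-per-projector part is nothing exotic - roughly a left/right eye camera pair per output, along these lines (names and eye separation are placeholders, and this says nothing about which GPU ends up doing the work):

```csharp
using UnityEngine;

// Hypothetical sketch of rendering twice per projector for stereo 3D:
// each projector gets a left/right eye camera pair offset by half the
// eye separation. Warping and blending (Mersive SOL or NVIDIA's tools)
// would sit downstream of this; GPU affinity is not addressed here.
public class StereoPair : MonoBehaviour
{
    public Camera leftEye;               // assigned in the inspector
    public Camera rightEye;
    public float eyeSeparation = 0.064f; // metres, placeholder value

    void LateUpdate()
    {
        // Offset the eyes symmetrically around this rig's transform.
        Vector3 offset = transform.right * (eyeSeparation * 0.5f);
        leftEye.transform.position = transform.position - offset;
        rightEye.transform.position = transform.position + offset;
        leftEye.transform.rotation = transform.rotation;
        rightEye.transform.rotation = transform.rotation;
    }
}
```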
Have you guys had any luck with Unity in this kind of environment?
All, have a look at MiddleVR; you can download the free version to test all features for 30 days.
I have been using it for the last 2 years to run a 5-screen CAVE system at 120 Hz, 1024x768 per screen.
It is a little pricey, but it works well!
Regards
Max
Did you manage to get one Unity instance rendering through multiple viewports (each viewport driven by its own GPU) on a single computer with several graphics cards?
If yes, I would be very interested to hear about it.
I currently run my CAVE system with one workstation per projector, so for each project I have to deal with network programming, synchronization and so on…