I’m wondering if the Unity Team can speak to whether 2018.3+ can utilize the added horsepower of dual RTX cards yet? We have an exhibit install that will finish at 8K, and we’re hoping to leverage the power of two RTX 2080 Ti cards to ensure quality and a stable frame rate at that ultra-high resolution.
I know that some advances were made by Nvidia VRWorks in unlocking the SLI power of the previous generation of cards, but I can’t find anything about the current architecture.
Yeah, that’s the only related post I had been able to find, and it links to an SLI Best Practices document “Last updated on 02/15/2011”. I imagine that the new NVLink system that replaced SLI functions similarly, but has a much higher-speed bridge (100 GB/s). We’ll be testing dual cards soon, and though I highly doubt we’re the first to try this, we’ll definitely share what we discover as best we can.
We love experimenting with photogrammetry and volumetric capture, so we’d enjoy getting a chance to test that in the next few weeks.
The traditional way Nvidia SLI has been implemented by drivers is alternate-frame rendering: each GPU renders every other frame. This means you can render at a higher resolution and/or frame rate, but at the cost of an extra frame of latency.
You can’t do that for VR, since latency is a primary concern, so you can’t use the default driver behavior. On top of that, if you reuse any data from a previous frame you get no benefit at all, since each GPU has to wait for the previous one to finish before it can start, and that’s becoming increasingly common in games.
That means that to support SLI you potentially need to significantly rework your rendering pipeline, all for something only a tiny fraction of a percent of users will have, and even then there’s a good chance it won’t increase performance much.
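The two effects described above (alternate-frame rendering roughly doubling throughput, and that gain vanishing once frames read data from the previous frame) can be sketched with a toy timing model. The 20 ms per-frame GPU cost is an arbitrary assumption for illustration, not a measurement:

```python
# Toy timing model of alternate-frame rendering (AFR).
# GPU_FRAME_MS is an assumed, illustrative per-frame cost.

GPU_FRAME_MS = 20.0


def afr_schedule(num_frames, num_gpus, cross_frame_dependency=False):
    """Return the finish time (ms) of each frame under AFR.

    Frame i is assigned to GPU i % num_gpus. A GPU may start a frame as
    soon as it finished its previous one, unless the frame reads data
    produced by the immediately preceding frame, in which case it must
    also wait for that frame to finish on the other GPU.
    """
    gpu_free = [0.0] * num_gpus   # time at which each GPU is next idle
    finish = []                   # finish time of each frame
    for i in range(num_frames):
        start = gpu_free[i % num_gpus]
        if cross_frame_dependency and i > 0:
            start = max(start, finish[i - 1])  # serializes the GPUs
        end = start + GPU_FRAME_MS
        gpu_free[i % num_gpus] = end
        finish.append(end)
    return finish


single = afr_schedule(10, num_gpus=1)
dual = afr_schedule(10, num_gpus=2)
dual_dep = afr_schedule(10, num_gpus=2, cross_frame_dependency=True)

print(single[-1])    # 200.0 ms for 10 frames on one GPU
print(dual[-1])      # 100.0 ms: AFR doubles throughput...
print(dual_dep[-1])  # 200.0 ms: ...but cross-frame reads erase the gain
```

The model only covers throughput; the extra frame of latency in real AFR comes from the CPU queuing a frame ahead so that both GPUs stay busy, which this sketch doesn’t simulate.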
Unity has elected not to spend resources on this. However, Nvidia released their VRWorks plugin for Unity, which does most of the work for you, though it won’t work with most post-processing plugins and might break some other shaders without extra work on your part.
This sounds great, until you read the Unity plugin reviews - conclusion: dreadful…
Why can’t Unity itself target both GPUs to render the individual eyes concurrently?
I know nothing about GPU scaling, and I just wonder why we need SLI in the first place. Why would it not be possible for Unity to send the left and right draw calls to both GPUs in parallel, instead of sending them to a single GPU serially?
Nvidia SLI seems very proprietary, and scaling to any number of GPUs is complicated stuff. But for VR, I believe that even if we were limited to only two GPUs it would still be a substantial improvement over a single one, and it would cover 99.9% of the market (by my estimates).
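As a back-of-the-envelope check on the one-GPU-per-eye idea: the gain per frame is close to 2x, minus the cost of copying the second eye’s image to the GPU that drives the headset. All numbers below (8 ms of GPU work per eye, a 2160x2160 RGBA eye buffer, the link bandwidths) are illustrative assumptions:

```python
# Rough cost model of rendering one eye per GPU instead of both
# eyes serially on one GPU. All figures are assumptions.

EYE_MS = 8.0  # assumed GPU render time for one eye


def serial_frame_ms():
    """One GPU renders the left eye, then the right eye."""
    return 2 * EYE_MS


def parallel_frame_ms(transfer_ms):
    """Two GPUs render one eye each; the second eye's image is then
    copied to the GPU connected to the headset before compositing."""
    return EYE_MS + transfer_ms


# An RGBA8 2160x2160 eye buffer is 2160 * 2160 * 4 bytes (~18.7 MB).
bytes_per_eye = 2160 * 2160 * 4
nvlink_ms = bytes_per_eye / 100e9 * 1e3  # over a 100 GB/s NVLink bridge
pcie_ms = bytes_per_eye / 16e9 * 1e3     # over ~16 GB/s PCIe 3.0 x16

print(f"serial (one GPU):       {serial_frame_ms():.2f} ms")       # 16.00 ms
print(f"parallel over NVLink:   {parallel_frame_ms(nvlink_ms):.2f} ms")  # 8.19 ms
print(f"parallel over PCIe 3.0: {parallel_frame_ms(pcie_ms):.2f} ms")    # 9.17 ms
```

Under these assumptions the copy is cheap relative to the render, so the split comes out close to 2x; the model ignores CPU-side costs like building and submitting the command streams twice, which is part of why it isn’t free in practice.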
I wonder (purely theoretically): would it be possible to run two instances of the same game on a PC, where each instance uses a unique GPU? If so, what limits Unity from running a single game in a similar way, where two 3D engines are synced but offset for the left and right eye?
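Purely as a sketch of the lock-step part of that idea: two workers standing in for the two engine instances can be kept in sync per frame with a barrier. Threads are used here only to keep the example self-contained; a real setup would use separate processes pinned to different GPUs, and that pinning is platform- and driver-specific and not shown:

```python
# Minimal lock-step sketch of the "two synced instances" idea.
# Two threads stand in for two engine instances, one per eye; a
# barrier makes each frame's left and right eye start together.

import threading

FRAMES = 3
barrier = threading.Barrier(2)   # both "instances" must arrive each frame
rendered = []                    # shared log of (frame, eye) pairs
lock = threading.Lock()


def eye_instance(eye):
    for frame in range(FRAMES):
        barrier.wait()           # start the frame in lock-step
        with lock:
            rendered.append((frame, eye))   # stand-in for rendering one eye
    # a real engine pair would also need input, physics state and RNG
    # kept identical across the two instances


threads = [threading.Thread(target=eye_instance, args=(eye,))
           for eye in ("left", "right")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(rendered))   # each frame number appears once per eye
```

The barrier is the easy part; the hard part the sketch hand-waves in the final comment is keeping two full simulations deterministic and identical, which is much of why engines don’t ship this out of the box.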
I really wish Unity could utilize both GPUs natively, without worrying about losing post-processing effects and/or having to incorporate poorly written plugins into our projects.
Maybe SLI and combining GPUs is the completely wrong approach for VR (in Unity)… maybe not… but currently I see no good solution while frame rates are suffering and hardware is not the main bottleneck.