Benefit of multiple GPUs?

Hi

I hope someone around here has a little more knowledge in this area than me. I can only find old threads on this topic.

Will Unity benefit from multiple GPUs (same cards)? We have a CAVE environment using NVIDIA Mosaic for the screen setup, and we are thinking about upgrading the projectors to new laser projectors running full HD. The final resolution will be something like 7680 x 1080 pixels, so we are looking into upgrading the graphics card as well.

Based on the benchmarks it would make the most sense ($$$-wise) to buy two NVIDIA RTX A4000 cards rather than one RTX A5000 card - if Unity can support the use of more than one card?

I’m also thinking a bit about the future. We often have interior projects that we wish to show, so it would be cool if I could set them up using the new ray tracing features for lighting/shadows/reflections and so on. So if Unity could benefit from more than one card, it would make sense to buy two A4000 cards.

TL;DR:

1 - Can Unity use multiple GPUs, so that I get higher FPS?
2 - Can HDRP use multiple GPUs for RayTracing calculations?

Can Unity use multiple GPUs: Maybe … but…
Will you get a higher FPS using multiple GPUs: No.
Can HDRP use multiple GPUs for RayTracing: No.

Multi-GPU rendering is mostly dead at this point in the consumer space. The main problem is post processing and TAA. Most of these techniques rely on having a copy of the entire rendered frame, which means splitting a single frame across two GPUs hurts performance, or a copy of the previous entire rendered frame, which means alternating rendering between two GPUs hurts performance. For real-time rendering, as fast as modern NVLink is, transferring data is still a major bottleneck. For offline non-realtime rendering it’s fine, because a few extra ms isn’t going to be an issue when each frame takes seconds or minutes to complete.
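To put rough numbers on that transfer cost, here's a back-of-envelope sketch in Python. Every figure in it is an assumption chosen for illustration, not a measured value: an RGBA16F HDR render target (8 bytes per pixel) at the 7680 x 1080 resolution mentioned above, ~50 GB/s of effective NVLink bandwidth, and a 60 FPS frame budget.

```python
# Back-of-envelope: cost of copying one full frame between GPUs over NVLink.
# All numbers below are illustrative assumptions, not measurements.
WIDTH, HEIGHT = 7680, 1080    # CAVE resolution from the post above
BYTES_PER_PIXEL = 8           # assumed RGBA16F HDR render target
NVLINK_BYTES_PER_SEC = 50e9   # assumed effective NVLink bandwidth

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
copy_ms = frame_bytes / NVLINK_BYTES_PER_SEC * 1000
frame_budget_ms = 1000 / 60   # 60 FPS target

print(f"frame size:   {frame_bytes / 1e6:.1f} MB")
print(f"copy time:    {copy_ms:.2f} ms")
print(f"frame budget: {frame_budget_ms:.2f} ms")
```

Under these assumptions a single full-frame copy is a bit over 1 ms; a post-processing chain that needs several such copies per frame quickly eats a meaningful chunk of a 16.7 ms budget, which is the bottleneck described above.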

When rendering with DX11, you get multi-GPU support “for free”. As best I understand it, the graphics drivers decide how to split up work for the program and try to “just make it work”, but if you’re using almost any modern rendering features it ends up being faster to just use one GPU, so the drivers will do exactly that - making a lot of people upset that X game or Y engine “isn’t working”. There’s not really much the game or engine can do, since it’s not up to the game or engine to use multiple GPUs.

There are of course ways to force multi-GPU rendering, but again, it usually won’t actually have any benefit, and may even slow things down.

As for HDRP RayTracing, that’s a DX12 feature. Supporting multiple GPUs in DX12 requires manually writing code for it, and AFAIK Unity chose not to, in part because, like the above cases, it’s unlikely to actually be faster. This is low-level engine stuff that’s not exposed to C#, so AFAIK there’s no way for an SRP to do this. Perhaps insanely, the “easiest” way to make this work might be to run two completely separate instances of the game, each set to explicitly use only one GPU, and have them communicate with each other like a multiplayer game would. You can force Unity to use a specific GPU by starting it from the command line with -gpu #, where you replace # with the GPU index you want.
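The "two instances, one per GPU" idea could be sketched like this in Python. The build name is a hypothetical placeholder, the `-gpu <index>` switch is the one mentioned above, and the networking that would keep the two instances in sync is left out entirely:

```python
import subprocess  # needed only for the actual launch, shown commented out below


def pin_to_gpu_cmd(build_path: str, gpu_index: int) -> list[str]:
    """Command line that forces a Unity standalone build onto one GPU
    via the -gpu <index> switch mentioned in the post above."""
    return [build_path, "-gpu", str(gpu_index)]


# Launching one instance per GPU would then look like
# (build name is hypothetical; synchronisation between the two
# instances - e.g. multiplayer-style networking - is not shown):
#
# procs = [subprocess.Popen(pin_to_gpu_cmd("MyCaveApp.exe", i)) for i in range(2)]
```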

TLDR: The benefits of multi-GPU rendering for consumers basically ceased to exist around 10 years ago, so very few game engines make active efforts to support it.

Thank you so much for a great and in-depth answer bgolus.

It looks like there won’t be any benefit to buying two GPUs instead of one.

I looked into suppliers supporting the CAVE package, and it looks like MiddleVR, who partnered with Unity not so long ago, has some support for this. They use some kind of cluster setup and then sync the frames if you use multiple machines. But this requires more in-depth knowledge and there are some limitations with various scripts. Since I’m more of a plug’n’play user, this solution might not be for me 🙂

I guess the easy option is just to buy a graphics card now and then upgrade to a faster one every two years or so.

1 - Probably yes (but not tested by myself): there are examples of Unity engine (DX11) based games with good NVIDIA SLI (multi-GPU) scaling (higher FPS).

2 - Probably no; but if you have a lot of time, you can write your own DX12 native plugin for Unity with SLI support (you can find examples of how to write these plugins on the Internet here and there, or I can give you links).

I am also looking into this for a CAVE style environment.

I’m speaking with the MiddleVR folks and have a trial, so I’m wrapping my head around that.

Of course, rather than scaling out (i.e. having a cluster), I am trying to work out where the limit is in scaling up (a single machine with one GPU).

In your use case that’s about 8K’s worth of pixels, and I would have thought a single machine could do that, but this is really what I am looking at as well.

However, my question to anyone reading this is whether NVLink will work in Unity - the idea being to link two A6000 cards together so they present as a single card. In theory that would be the best you could throw at this, but I really don’t know whether Unity will see this as a single GPU…

Has anyone experimented with NVLink?

Kindest
Paul

@paulinventome , have you found any solution for your multi-GPU rendering?