I was curious if there’s a limit to the size of physics/particle simulations you can run in Unity using compute shaders / HLSL, and if it’s possible to distribute the calculations over multiple GPUs … or, in the most extreme case, even onto a supercomputer?
I’d tend to say it’s not designed for that, so most of its normal advantages would be lost if you somehow managed to force it…
Supercomputers still tend to run close to “bare metal”. The compute nodes use stripped-down, headless OS images so as not to waste cycles, so there’s no support for software like the Unity player.
… and your last paragraph partly answers my question, in the sense that Unity isn’t a completely closed-off environment where one has to stay within the program.
BTW I might be wrong and/or offend some people here, but the scientific programs that run on supercomputers are often not themselves specially written for a big cluster of computers. It’s the team that runs those machines that often helps manage the adaptation.
That does make sense, but I assume they request the source code for whatever the scientists wanna run, don’t they? Since Unity isn’t open source, that may be troublesome…
Also, supercomputers and clusters are not quite the same. When you say supercomputer, I’m thinking of the stuff on the exaflop-ranked Top500 list etc. Those systems consist of hundreds of CPUs and GPUs connected with a specialized interconnect.
A cluster is more like a set of servers, which are normal computers and often even virtualized. On those you then run separate software that just exchanges data over a network (rough sketch of that kind of node-to-node data exchange below). That of course is a viable use case for Unity, and they offer it to professional users, e.g. for simulation: https://unity.com/products/unity-simulation-pro
EDIT: Interesting, they mention “Multi-GPU distributed rendering to share the workload” there. So I guess they have some solutions, but it’s not something you find in the public API, unfortunately. Not too surprising, since “Simulation Pro” users have likely bought source code access.
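To illustrate the “separate software that just exchanges data over a network” idea, here is a minimal sketch of one cluster node streaming its slice of a particle field to a collector process over TCP. The host name, port and flat float layout are made-up assumptions for illustration, not anything Unity or Simulation Pro prescribes:

```csharp
using System;
using System.Net.Sockets;

// One worker node's side of the "separate software exchanging data over a
// network" idea: compute a slice of the particle field locally, then push the
// raw floats to a collector node over TCP. "collector.local", port 5555 and
// the flat float layout are illustrative assumptions.
class WorkerNode
{
    const int Particles = 4096;                 // this node's share of the simulation

    static void Main()
    {
        var positions = new float[Particles * 3];
        var buffer = new byte[positions.Length * sizeof(float)];

        using var client = new TcpClient("collector.local", 5555);
        using var stream = client.GetStream();

        while (true)
        {
            Step(positions);                    // advance this node's slice one tick
            Buffer.BlockCopy(positions, 0, buffer, 0, buffer.Length);
            stream.Write(buffer, 0, buffer.Length);
        }
    }

    // Placeholder physics step; the real number crunching would live here.
    static void Step(float[] positions)
    {
        for (int i = 0; i < positions.Length; i++)
            positions[i] += 0.001f;
    }
}
```

Each node would run its own copy with a different slice; scaling out is then a matter of adding nodes and merging the streams on the collector side.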
Yes, that’s the kind of option/service I was hoping for: develop a physics simulator in Unity, and if it works well and we want to scale things up big time, we don’t need to rebuild it.
Might be pricey though. Anyway, I guess they do the same work as the teams I mentioned previously that run supercomputers; it’s a specific skillset.
Mh, I started doubting whether supercomputers are clusters, but these days they are; in the past it might have been different. Supercomputers mostly work like a cloud service: people using them get access to a section of the system to run their projects on.
In that sense I guess the use (and the need for source code) can differ a lot. At universities, where HPC clusters are often located, there are lots of different service options, ranging from students who just need a more powerful computer than their laptop to programs that are specifically designed to make use of them.
A scientific program could be made of two parts: the number-crunching part and the visualization part.
The number-crunching part could be distributed, written using OpenMPI or something like that.
The visualizer, however, can be anything, as long as it can talk to the number-crunching part via some sort of interprocess communication. It also does not need to be distributed.
Unity can be used to write the visualization part.
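As a rough sketch of what that Unity end could look like, here is a visualizer component that assumes the number cruncher streams raw float triplets (x, y, z) per particle over a local TCP socket. The port, message layout and particle count are illustrative assumptions, not a fixed protocol:

```csharp
using System.Net.Sockets;
using UnityEngine;

// Sketch of the visualizer side only: the number-crunching process (MPI or
// otherwise) is assumed to stream particleCount * 3 floats per frame over a
// local TCP socket. Port and layout are hypothetical.
public class SimulationVisualizer : MonoBehaviour
{
    public string host = "127.0.0.1";
    public int port = 5555;              // hypothetical port the solver listens on
    public int particleCount = 4096;

    TcpClient client;
    NetworkStream stream;
    byte[] frame;                        // one frame = particleCount * 3 floats
    Vector3[] positions;

    void Start()
    {
        client = new TcpClient(host, port);
        stream = client.GetStream();
        frame = new byte[particleCount * 3 * sizeof(float)];
        positions = new Vector3[particleCount];
    }

    void Update()
    {
        // Block until a full frame of positions has arrived (fine for a sketch;
        // a real visualizer would read on a background thread).
        int read = 0;
        while (read < frame.Length)
            read += stream.Read(frame, read, frame.Length - read);

        for (int i = 0; i < particleCount; i++)
        {
            positions[i].x = System.BitConverter.ToSingle(frame, (i * 3 + 0) * 4);
            positions[i].y = System.BitConverter.ToSingle(frame, (i * 3 + 1) * 4);
            positions[i].z = System.BitConverter.ToSingle(frame, (i * 3 + 2) * 4);
        }
        // positions[] can now drive a particle system, instanced meshes, etc.
    }

    void OnDestroy()
    {
        stream?.Close();
        client?.Close();
    }
}
```

A real visualizer would read on a background thread and double-buffer frames, but the point stands: Unity only needs a stream of numbers from the solver, not the solver itself.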
The documentation does not talk about distributing over multiple GPUs. Like I said, the most likely scenario is that it means parallel in the sense of using many GPU threads, but there’s only one GPU device.
In fact, I do not recall Unity having the ability to utilize multiple GPUs in the first place.
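For reference, this is what “many GPU threads, one device” looks like in practice: a single ComputeShader.Dispatch call fans a kernel out over thousands of threads, but they all run on the one GPU the player is using. The ParticleSim.compute asset, its CSMain kernel and the _Positions/_DeltaTime names are hypothetical placeholders:

```csharp
using UnityEngine;

// Minimal sketch of dispatching a particle-update kernel in Unity.
// The .compute asset and its names are hypothetical; the point is that
// Dispatch() spreads the work over many GPU threads on a single device.
public class ParticleDispatcher : MonoBehaviour
{
    public ComputeShader particleSim;   // assign the .compute asset in the Inspector
    public int particleCount = 1_000_000;

    ComputeBuffer positions;
    int kernel;

    void Start()
    {
        kernel = particleSim.FindKernel("CSMain");
        positions = new ComputeBuffer(particleCount, sizeof(float) * 3);
        particleSim.SetBuffer(kernel, "_Positions", positions);
    }

    void Update()
    {
        particleSim.SetFloat("_DeltaTime", Time.deltaTime);
        // Assuming [numthreads(64,1,1)] in the shader: particleCount / 64 thread
        // groups — massively parallel, but all on the one GPU the player runs on.
        particleSim.Dispatch(kernel, Mathf.CeilToInt(particleCount / 64f), 1, 1);
    }

    void OnDestroy() => positions.Release();
}
```

As far as the public API goes, there is nothing in that dispatch to point at a second GPU, which matches the conclusion above: anything beyond one device has to happen at the process level, like the cluster setups discussed earlier in the thread.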