Will NVIDIA PhysX give the same result across devices/computers?

I’m working on a networked multiplayer physics game using Unity 3.2 and its built-in NVIDIA PhysX engine. It would greatly simplify my network communication code if I knew how PhysX behaved across multiple computers (i.e. from one Mac to another, or from an iPhone to a Mac).

Has anybody done any testing across multiple computers? If I start with a given system and give that system a set of inputs, will the answers on multiple computers be exactly the same?

Thanks,

It won’t be exactly the same, no.
Numeric simulation like this isn’t, and never will be, identical across different CPUs, CPU models, etc.
You will have to do local and server simulation and apply corrections if they differ too much.
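The “simulate locally, correct from the remote state” idea above can be sketched roughly like this. This is a hypothetical illustration, not PhysX or Unity API; the function name, threshold, and blend factor are all made up for the example.

```python
# Hypothetical sketch: reconcile a locally simulated position with a
# remote (authoritative) snapshot when the two drift too far apart.
# All names and constants here are illustrative assumptions.

def reconcile(local_pos, remote_pos, threshold=0.5, blend=0.2):
    """Keep the local result for small drift; for large drift,
    move a fraction of the way toward the remote state so the
    correction isn't a visible snap."""
    error = [r - l for l, r in zip(local_pos, remote_pos)]
    magnitude = sum(e * e for e in error) ** 0.5
    if magnitude <= threshold:
        return local_pos  # small drift: trust the local simulation
    return [l + blend * e for l, e in zip(local_pos, error)]
```

In practice you would run this every time a snapshot arrives, and tune the threshold so corrections stay below what players can notice.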

With iOS in the mix it should get a bit easier, because your amount of physics usage will be significantly lower unless it’s iPad-exclusive.

Ok, thanks. I’m not going to run a server - it’ll be peer-to-peer connections only. I guess I’ll just pass the data between the instances of the game to apply corrections.

So you aren’t going to use Unity networking then?
In that case, be aware that you require iOS Pro to proceed, as System.Net is an iOS Pro-only feature.

That’s correct. I am not using the built-in Unity networking. I do have Unity 3 Pro and iOS Pro.

Thanks!

oki dok
Just wanted to make sure you know about it :)

Wish you good luck :)

Great, thanks for the help!

@dreamora - Just curious… did you do any tests related to this (i.e. comparing actual outputs based on a system’s inputs), or is this based on something else?

Experience from past online projects I was part of. Those were desktop-only, and even then they all ran on different types of Core i CPUs.
Adding other CPU manufacturers, or completely different CPU architectures, will only make it worse.

But it’s kind of to be expected, as the simulation isn’t deterministic and can’t be, especially as long as we rely on floats and the CPU, since neither of them is deterministic and stable enough in its calculations within the IEEE specs.
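One concrete reason for the non-determinism mentioned above: floating-point addition is not associative, so any change in evaluation order (different compilers, SIMD widths, or CPUs reordering a long accumulation) can change the result. A minimal demonstration with ordinary IEEE-754 doubles:

```python
# Floating-point addition is not associative: grouping the same three
# numbers differently produces two different results.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)   # False
print(left, right)     # 0.6000000000000001 0.6
```

A physics solver performs millions of such operations per frame, so even tiny ordering differences between machines accumulate into visibly divergent states.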

The only kind of hardware where you currently get more or less deterministic results is when you run it purely on the GPU, something that’s out of the question as Unity does not support SM4+.

With DX9 and SM3 as the upper end, it’s not that likely to happen, as SM4+ GPUs are the ones capable of doing reasonable GPGPU programming. As a prominent example of this, I’d point to Crysis, which doesn’t offer physics simulation in MP for DX9 users, but does for DX10+ users.

Oh, ok. I don’t quite buy that a GPU is required. It’s certainly possible to have a deterministic physics engine on any CPU. Math is math, even if it’s on a CPU. Float rounding is one area that can cause different answers, and there are plenty of other areas of potential run-time differences. However, I’ve personally built physics routines that give the exact same answers for a physics system across multiple platforms.
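One common way to get bit-identical physics on any CPU, alluded to above, is to avoid floats entirely and use fixed-point arithmetic, which is pure integer math and therefore reproducible everywhere. A small sketch (the 16.16 format and helper names are illustrative, not from any particular engine):

```python
# Hypothetical sketch: 16.16 fixed-point arithmetic as one route to
# cross-platform determinism. Integer operations give bit-identical
# results on every conforming platform.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in 16.16 fixed point

def to_fixed(x: float) -> int:
    """Convert a float to 16.16 fixed point (used only at the edges,
    e.g. when loading content; the simulation itself stays integer)."""
    return int(round(x * ONE))

def fx_mul(a: int, b: int) -> int:
    """Multiply two 16.16 values, rescaling the result."""
    return (a * b) >> FRAC_BITS

print(fx_mul(to_fixed(0.5), to_fixed(3.0)) == to_fixed(1.5))  # True
```

The tradeoff, as discussed later in the thread, is reduced range and precision compared to floats, so the simulation has to be designed around those limits.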

I guess I’m curious whether anyone has done any actual testing on NVIDIA PhysX as implemented in Unity3D 3.x to see how physics solutions differ between different computers at run time. If not, no biggie; I’ll run test cases for my specific use case.

Thanks,

It can be deterministic, right.
But the tradeoff is that it’s either no longer usable in realtime interactive environments (doubles), or it’s not a full-scale simulation but a simplified form where you can work with fixed-point precision, where the error won’t explode the way it does with the chained, dependent linear equation systems that a physical simulation brings with it. Numerically there isn’t much you can do; even if you solved the systems with QR decompositions, the number of dependent equations is still large enough to kill the limited precision of float, especially as different CPUs have different “misbehaviours” below 10^-6 or so (based on the normalized float form). That’s still pretty troublesome: a few multiplications with other numbers carrying errors in this range and you approach errors in the 10^-2 to 10^0 range, which will lead to visible deltas, as will any error larger than 10^-5.
It’s the same in the other direction: with world sizes larger than 50k-100k units in the + or - direction, you will find at least one user who enjoys an all-jittering world, because rotations start to jump forward and backward due to the limited precision.
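The large-world jitter described above comes from the gap between adjacent representable 32-bit floats growing with magnitude. You can measure that gap directly by incrementing a float’s raw bit pattern (the helper name here is just for this example):

```python
import struct

def ulp32(x):
    """Gap to the next representable 32-bit float above x,
    computed by incrementing the raw bit pattern."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

print(ulp32(1.0))       # 1.1920928955078125e-07
print(ulp32(100000.0))  # 0.0078125
```

At 100,000 units from the origin, positions can only change in steps of about 0.008 units, which is exactly the kind of coarse quantization that makes rotations and movement visibly jump.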

As for a deterministic test: I haven’t tested it again with U3; the last project where I had to “enjoy” this was Unity 2.6.1-based, so you could be lucky and the errors are smaller now.

But even then: if you do realtime networking it won’t help much. If you do something turn-based, for example a cannon-fight game or the like, it might work out with some luck.
For realtime action it’s a much larger problem, because you:

  1. Don’t have functionality to drive the simulation forward or backward, which
  2. Forces you to “approximate” a user’s current state based on a past state and past input, which has a good chance of no longer being fully accurate (you can’t set input → response handling back 200ms+ just for the sake of networking, as it feels unresponsive … and at 100ms or even lower, latency will break your neck).
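Point 2 above is essentially dead reckoning: estimating where a remote player is “now” from the last snapshot and its age. A minimal sketch (the function name and numbers are illustrative assumptions, not any real networking API):

```python
# Hypothetical dead-reckoning sketch: linearly extrapolate a remote
# player's position from the last received snapshot.

def extrapolate(last_pos, last_vel, snapshot_age):
    """Linear extrapolation from (position, velocity). It diverges
    from the truth whenever the remote player changed input after
    the snapshot was taken, which is why corrections are needed."""
    return [p + v * snapshot_age for p, v in zip(last_pos, last_vel)]

# Snapshot taken 100 ms ago, player moving at 5 units/s along x:
print(extrapolate([10.0, 0.0, 0.0], [5.0, 0.0, 0.0], 0.1))
# [10.5, 0.0, 0.0]
```

The estimate is only correct while the player keeps moving straight; the longer the snapshot age (i.e. the higher the latency), the larger the potential error, which is the core problem the post describes.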

Networked physics in general is no trivial topic; I haven’t heard of many projects that do physics simulation and network it.
Frostbite 2 might be the only one I’ve seen, and even there it primarily uses physics simulation for things that have discrete fire-off points (explosions and the destruction they trigger), and I think Frostbite runs its physics completely on the GPU.