How to make Unity's physics more deterministic?

I’m working on a chaos theory simulation. I have an empty box and I drop 50 cubes into it. The cubes have the same starting positions and rotations each time, and when I run the simulation they land in the same spots every time, as expected. I’ve duplicated the system and translated it +100 units to the right. The empty box is identical, the cubes are all identical, and the positions of the cubes relative to their box are identical; the box and cubes are just 100 units to the right.

When I run the scene, the duplicated system behaves differently, with the cubes colliding and landing in slightly different places. Even worse, the original system now behaves slightly differently too. Upon testing, the mere presence of a single faraway cube that never collides with anything shifts each cube in the original system by about a thousandth of a unit.

I’ve heard this can be a result of floating-point rounding or truncation, but I’m not sure how to fix it. I need the system to simulate exactly the same every time, regardless of other objects in the scene, as long as they never touch it.

Reference photos for an idea of what this looks like:

Any help is appreciated.

Are you referring to the one that interacts with GameObjects? Because you can’t make that one deterministic. You can make Unity Physics (the package) and Havok Physics run deterministically but they involve DOTS.

If you change the collision detection mode on the rigidbodies to Continuous Speculative then visually they’ll at least appear to be deterministic provided you don’t move them too far from the world origin. You could try playing with the physics settings and you may be able to improve things further but because of floating point numbers there will always be some imprecision.
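A minimal sketch of that suggestion, assuming Unity's built-in 3D physics; the component name is a placeholder, and you could just as well set the mode per-rigidbody in the Inspector:

```csharp
using UnityEngine;

// Hypothetical one-off setup component: switches every Rigidbody in the
// scene to Continuous Speculative collision detection at startup.
public class SpeculativeSetup : MonoBehaviour
{
    void Awake()
    {
        foreach (var rb in FindObjectsOfType<Rigidbody>())
        {
            rb.collisionDetectionMode = CollisionDetectionMode.ContinuousSpeculative;
        }
    }
}
```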


The point of the simulation is to visualize the significance of a small change by comparing the original system side by side with a duplicate that differs only in that small change. Before that can work, though, the identical duplicate needs to collide identically with the original, with no changes at all. I did notice that when re-running the scene, the systems simulate the same each time; it only becomes different when adding new things to the scene. Makes me wonder if there is no physics solution and I should instead look into running multiple instances of the project side by side?

You could place both boxes of cubes at the exact same location but on different physics layers so they don’t interfere with each other and then run your simulation.
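A sketch of the layer-separation idea, assuming you have created two layers (the names "SystemA" and "SystemB" here are placeholders) in the Tags & Layers settings and assigned each system's objects to one of them:

```csharp
using UnityEngine;

// Hypothetical setup component: disables collisions between the two
// systems' layers, equivalent to unticking their cell in the
// Layer Collision Matrix in the Physics project settings.
public class LayerSeparation : MonoBehaviour
{
    void Awake()
    {
        int a = LayerMask.NameToLayer("SystemA"); // placeholder layer name
        int b = LayerMask.NameToLayer("SystemB"); // placeholder layer name

        Physics.IgnoreLayerCollision(a, b, true);
    }
}
```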

I believe it’s also possible to simulate multiple scenes at the same time but I’ve not tried this myself.

You can have a “physics scene” per loaded or created scene and simulate each one any way you want.

To start, when calling SceneManager.LoadScene[Async]/CreateScene, the local physics mode in the LoadSceneParameters/CreateSceneParameters needs to include LocalPhysicsMode.Physics3D.

From the resulting scene, you can get the local physics scene.

The physics scene lets you simulate it and perform queries.
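A minimal sketch of the steps above; "SimScene" is a placeholder scene name, and this assumes automatic simulation has been turned off in the Physics settings so every world is stepped manually with the same timestep:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Loads two copies of the same scene, each with its own local physics
// world, then steps both in lockstep from FixedUpdate.
public class LocalPhysicsRunner : MonoBehaviour
{
    PhysicsScene _physicsA;
    PhysicsScene _physicsB;

    void Start()
    {
        var parameters = new LoadSceneParameters(
            LoadSceneMode.Additive, LocalPhysicsMode.Physics3D);

        Scene a = SceneManager.LoadScene("SimScene", parameters);
        Scene b = SceneManager.LoadScene("SimScene", parameters);

        _physicsA = a.GetPhysicsScene();
        _physicsB = b.GetPhysicsScene();
    }

    void FixedUpdate()
    {
        // Local physics scenes are not auto-simulated; stepping them
        // explicitly with identical timesteps keeps them in lockstep.
        if (_physicsA.IsValid()) _physicsA.Simulate(Time.fixedDeltaTime);
        if (_physicsB.IsValid()) _physicsB.Simulate(Time.fixedDeltaTime);
    }
}
```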

This should make the different instances as “separate” as you can get, so with objects positioned the same way in separate scenes and with the same simulation update steps, I would hope for identical results. Color coding the objects per scene could be helpful.

Different results when objects are added sounds like the kind of nonsense stateful physics backends would do. On that front, you could try using Unity Physics (DOTS physics), which happens to have multi-physics-world support, and see if that helps.

Differences from translating everything might be from numeric inaccuracies in the physics computation. For a side-by-side, instead of actually moving the objects, consider doing something like having the physical representations be separate from the simulated objects, and synchronize the visual object transforms with the simulated objects but with some specified translation.


This is a myth. It’s no more deterministic than PhysX or Box2D. All of them use floating-point arithmetic, which has its usual implementation differences across devices. Beyond that, they’re all deterministic.

This has been asked hundreds of times on these forums, and unfortunately the answer to “can you make it more deterministic” is no, you cannot.

None are fully deterministic across devices, but none do anything random either. Differences not explained by loading-order changes or other changes to the initial state should be considered a bug. For instance, if you are getting different results on the same device across repeated plays of the same scene, that should be considered a bug.

When you come out of play mode, the physics scenes are destroyed, so when you go back into play mode the set-up order will be identical. Hidden state such as ordering and broadphase results should be the same.


I pulled that info from their documentation, so if they’re no more deterministic, they really should state as much.

I totally agree. No documentation in the DOTS physics should say fully deterministic or that they are more deterministic than anything else in Unity because that’s wrong and if it does then it needs to be changed (I don’t work on it).

The problem is that the word “deterministic” is often stated on its own and doesn’t mean fully deterministic; it ignores the final hurdle of floating-point handling across devices. Both the native systems and the HPC# system suffer the same FP handling, and there’s nothing a C# physics system can do to bypass that. AFAIK the ECS system offers a deterministic guarantee that the order in which components are stored (etc.) is always the same (even between save/load), but this is no different from the other core systems in Unity, such as the order things are loaded in scenes.

For a long time, there was the intent to eventually get Burst to deal with the floating-point issue but AFAIK that’s not a feature that’s available yet. If it is then I missed that news!

To note, the same problems would make your own C# code non-deterministic too; obviously it’s not a magic bullet that any one system can solve, unless something such as fixed-point math were available and used throughout.

I know a lot of this you’ll know but I’m adding it here to clarify for anyone reading it.


This happens because floating-point imprecision varies with the distance to the world’s origin. A 32-bit floating-point number always has 6-7 significant digits, regardless of where you locate the decimal point.

When your scenario is at the origin of coordinates (0, 0, 0), the floating-point numbers are like 1.23456, that is, there are 5 digits after the decimal point. If you move the scenario 100 units to (100, 0, 0), then the floating-point numbers on the displaced axis are like 123.456, so you lose 2 precision digits. As a result, the numeric imprecisions in your physics simulation accumulate differently and the results differ.
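The precision loss is easy to demonstrate in plain C#, no Unity needed. A sketch: the same tiny offset that is representable near the origin vanishes entirely near x = 100, because the spacing between adjacent floats (the ulp) grows with magnitude:

```csharp
using System;

class FloatPrecision
{
    static void Main()
    {
        // Near 1, the ulp is ~1.2e-7, so a 1e-6 offset survives.
        float nearOrigin = 1f + 1e-6f;
        // Near 100, the ulp is ~7.6e-6, so the same offset rounds away.
        float farAway = 100f + 1e-6f;

        Console.WriteLine(nearOrigin == 1f);  // False
        Console.WriteLine(farAway == 100f);   // True
    }
}
```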

The solution is to run both physics simulations at exactly the same location, but separate the visual representations. Use two different physics scenes with the exact same initial conditions, including the world coordinates. I wouldn’t use collision layers to separate the simulations, as this might still cause inconsistent results (as you observe now in your original scene).

Then, in your scripts, assign each physics object a visual representation and update it every frame: copy the position and rotation from each physical object to its visual counterpart. When running two instances, apply an offset to one set of visual objects so both simulations can be observed side by side.
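The per-frame sync described above could look something like this sketch; all names are placeholders, and the offset is purely cosmetic since the simulated objects themselves never leave the origin:

```csharp
using UnityEngine;

// Hypothetical visual-only proxy: mirrors a simulated object's pose,
// shifted by a fixed display offset so two simulations can sit side by side.
public class VisualProxy : MonoBehaviour
{
    public Transform simulated;  // the object living in the physics scene
    public Vector3 displayOffset = new Vector3(100f, 0f, 0f);

    void LateUpdate()
    {
        // Copy the pose after physics has run this frame.
        transform.SetPositionAndRotation(
            simulated.position + displayOffset,
            simulated.rotation);
    }
}
```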


A scene will play out VERY slightly differently under certain situations. I’m not 100% sure about the exact scenarios as I’ve only done limited testing, but I’ve found two collision outcomes can happen, both with no changes whatsoever. I have a basic camera-controller player with no collision; if that ‘player’ moves during the simulation, the simulation tends to land on the second outcome.

Back to the two-system scene: I tried using different collision layers because it was simple to try out. When the systems have identical starting positions but use different collision layers, most of the scene resolves identically, but a few cubes bounce differently.

I’ll try using different physics scenes.
I’ll also try this DOTS or Havok everyone seems to be talking about, but I’m not really sure how it would help?

Thanks everyone for the awesome feedback and advice.

I’ve not seen that, and it shouldn’t be the case if you’re talking about playing a scene, coming out of play mode, and playing it again. Note that loading a scene and running it, then loading it again, won’t necessarily give you the same thing, but doing all of that twice will.

This is a different starting condition, and thus a difference, so it will give you slightly different results as those tiny floating-point differences add up. Quite apt for a simulation that’s sensitive to initial conditions.

Obviously this isn’t restricted to physics; the same would happen with a simple C# script iteratively running a formula with feedback, such as the ones used in chaos theory. Those tiny differences add up.
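A sketch of that kind of feedback formula in plain C#, using the logistic map as a stand-in example: two seeds differing by roughly one float ulp produce trajectories that bear no resemblance to each other after a few dozen iterations, just like the cubes:

```csharp
using System;

class ChaosDemo
{
    static void Main()
    {
        float r = 3.9f;          // a chaotic parameter for the logistic map
        float a = 0.5f;
        float b = 0.5f + 6e-8f;  // roughly one ulp away from 0.5

        // Iterate x -> r * x * (1 - x); the tiny initial difference
        // is amplified on every step.
        for (int i = 0; i < 50; i++)
        {
            a = r * a * (1f - a);
            b = r * b * (1f - b);
        }

        Console.WriteLine($"{a} vs {b}"); // the two values have fully diverged
    }
}
```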

For anyone seeing this post in the future, my best solution ended up being a very silly but ultimately perfect workaround.

MIRROR. An “online” locally hosted server and clients on the same device. The cubes are synced initially, but never again. Run the server and clients side by side and they simulate identically, as desired. Make a minute adjustment on one of the clients and it does not affect the server or the other clients. The server can then compare position, velocity, etc. of the same object across the clients and server. In this way I can simulate multiple changes to a system concurrently…at the expense of my poor CPU.

Thanks to everyone that contributed. I’m sure someone smarter than I am could implement a solution that doesn’t require Mirror, but I’m familiar with it and it works.