What is the reason for implementing Unity physics on a fixed timestep basis?

I am seeking to gain some understanding of the design decisions behind basing Unity physics on a fixed timestep system.

I believe it’s mostly to make the simulation frame-rate independent, and more efficient by letting it run at a lower rate than rendering. You also have to consider that the simulation is often run on a server for multiplayer games, and clients with high-refresh-rate monitors will interpolate the frames sent by the server to keep things moving smoothly.

If you change the physics frame rate (Fixed Timestep) then the simulation will behave differently - a rigidbody may not be able to jump as high or stop as quickly. So if you were to update your physics in sync with your monitor’s refresh rate and then later changed the refresh rate, the simulation would behave differently. In competitive offline games, players could gain an advantage if they were allowed to choose the physics frame rate.

BTW - the fixed timestep system isn’t a Unity thing. Most game engines since id Software’s Quake have used this approach.

Thanks @zulo3d for your insight and information.
I can imagine that Unity may have chosen to use a lower frame rate for performance reasons.

Are you sure that physics will behave differently if the fixed timestep is changed? Mathematically, motion is proportional to the time delta.

Do a little test with different physics frame rates to see how high you can make a rigidbody jump. The higher the frame rate, the higher the rigidbody will rise.
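For illustration, here’s a minimal sketch of such a test in plain Python rather than Unity C# (the `peak_height` function and its numbers are my own, assuming the velocity-then-position semi-implicit Euler update most engines use):

```python
GRAVITY = -9.81  # m/s^2

def peak_height(v0, dt, steps=10_000):
    """Launch upward at v0 and return the highest position reached."""
    v, p, peak = v0, 0.0, 0.0
    for _ in range(steps):
        v += GRAVITY * dt   # integrate velocity first...
        p += v * dt         # ...then position (semi-implicit Euler)
        peak = max(peak, p)
    return peak

# Analytic peak for v0 = 5 is v0^2 / (2g) ~ 1.274 m; the finer
# timestep lands closer to it, and higher:
print(peak_height(5.0, dt=0.001))  # ~1.27 (1000 Hz physics)
print(peak_height(5.0, dt=0.02))   # ~1.22 (50 Hz physics)
```

With this update order the peak drops as the timestep grows; an explicit position-first update would overshoot instead. Either way, the height reached depends on the timestep.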

I would say the biggest consideration is actually mathematical consistency, not performance. Floating-point variables have only so much precision. To calculate the current rate of change (say, velocity) without knowing the future, you take the prior sample (position) and the known Δt since that value. Since the delta time is such a small value, the results will vary a lot, and physical animations and behaviour will look terrible if each new sample has a different Δt between them. Rendering and discrete logic can stutter a little with only tiny consequences, but the smoothness of a moving object’s trajectory is something that becomes astonishingly “uncanny valley” or unacceptably janky with just a little stutter.


Although, I agree - if you are using large-valued absolute positions in the vicinity of the observer, I suspect @zulo3d may be closer to the truth with the server-based timing considerations.
Hopefully, someone from @ashley_unity's team can clarify these things.

Logically, the displacement per second (for constant velocity) should not change. If it does, then there is a mistake somewhere.

Because integrating motion using a variable timestep will lead to erratic object trajectories.

In layman’s terms, this happens because physics engines assume velocities do not change direction or magnitude during a timestep. This is of course not true in the real world, since time as we experience it is continuous rather than discrete, and velocities may change continuously. So they introduce a bit of error every timestep, and the amount of error introduced is directly proportional to the duration of the timestep (longer timesteps = more error).

If you introduce the same amount of error every timestep, then object trajectories will deviate from the physically correct solution but will still look smooth and plausible. Now if you have a wildly different amount of error each timestep, objects will appear to zig-zag mid air and the illusion of physics is pretty much broken.

You can check this yourself by making a very simple ballistic trajectory simulation, using Euler’s method for integration:

v = v + a * dt; // velocity
p = p + v * dt; // position

If you use the same time delta (dt) every step you’ll get a smooth curve. If you use a different dt, you’ll get a mess.
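To make that concrete, here’s a small plain-Python sketch (my own illustrative code, not from Unity) that integrates those same two lines with fixed versus randomised timesteps and measures how smooth the resulting curve is via second differences:

```python
import random

def trajectory(dts, a=-9.81, v0=5.0):
    """Euler-integrate a ballistic path, one position sample per timestep."""
    v, p, samples = v0, 0.0, [0.0]
    for dt in dts:
        v += a * dt
        p += v * dt
        samples.append(p)
    return samples

def curvature_spread(samples):
    """Spread of second differences: 0 means a perfectly smooth parabola."""
    d2 = [samples[i + 1] - 2 * samples[i] + samples[i - 1]
          for i in range(1, len(samples) - 1)]
    return max(d2) - min(d2)

random.seed(0)
fixed    = trajectory([0.01] * 100)
variable = trajectory([random.uniform(0.005, 0.02) for _ in range(100)])
print(curvature_spread(fixed))     # ~0: every step bends by the same a*dt^2
print(curvature_spread(variable))  # much larger: the mid-air zig-zag
```

With a fixed dt, the second difference is the constant a·dt², so the sampled curve is a clean parabola; with varying dt it jumps around, which is exactly the “mess” described above.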


Displacement per second won’t change indeed.

But if you keep your displacement per second constant for a small amount of time - aka your timestep - and only allow it to change from timestep to timestep, then you’re introducing some error. This error means you’re over- or underestimating actual trajectories by a small amount every timestep, and it of course accumulates over time.
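A quick plain-Python check of that per-timestep error (illustrative code; constant acceleration, velocity-then-position Euler as in the earlier snippet):

```python
def position_after(t_end, dt, a=-9.81, v0=5.0):
    """Integrate constant-acceleration motion up to t_end with step dt."""
    v, p = v0, 0.0
    for _ in range(round(t_end / dt)):
        v += a * dt
        p += v * dt
    return p

exact = 5.0 * 1.0 + 0.5 * -9.81 * 1.0 ** 2  # closed form: v0*t + a*t^2/2
err_coarse = abs(position_after(1.0, 0.010) - exact)
err_fine   = abs(position_after(1.0, 0.005) - exact)
print(err_coarse / err_fine)  # ~2.0: halving dt halves the accumulated error
```

The error after one simulated second scales linearly with the timestep, which is the hallmark of a first-order method.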

So when you launch an object straight up into the air, @zulo3d is right that the height it reaches depends on the size of your timestep.

Note that none of this is exclusive to Unity, or even to physics engines. It’s just how numerical integration works.

Yes, such error of course accumulates over time. One can correct for it and minimise it. It does not always accumulate in the same direction, at least that is what my research has found.

“zulo3d is 100% right that it will go up higher the larger your timestep is” implies that the error is always of the same sign. In my experience, this is not typical of floating point error.

I know these matters are not exclusive to Unity - I used to write realtime systems, and I did my original research in this area on completely different 3D graphics technology.

My question is specific to Unity because that is what I am developing in now. Unity’s design decisions may well be similar to those of other graphics/game engines, of course.

For my thesis, I made empirical measurements on the numerical error of 3D computer graphics operations and graphed the error with respect to distance from the origin. The error does accumulate, as you say, but only in the same direction over short integrations. Over longer integrations it fluctuates between positive and negative; it did not always accumulate in the same direction. The fluctuation increases exponentially with distance due to error-propagation magnification.

So I was sceptical about the claim that increasing the timestep will increase the jump height - but if that is your and @zulo3d's experience, that is very interesting.

This has nothing to do with the sign of floating point error, as in currentPosition.y - expectedPosition.y. It has to do with convergence and energy conservation. And this is tied to the dynamic properties of the system you’re simulating.

Suppose you were simulating a pendulum instead of ballistic motion: you’d get slightly shorter oscillations the longer your timestep is, so in this case would you say the error is both positive (when moving to the right) and negative (when moving to the left)? Or imagine you were simulating buoyancy instead of gravity: would you then say the error has the opposite sign?

Now, whether the numerical approximation converges towards or diverges away from the ground truth - which is what I believe you call sign - depends on the integration method used, among other considerations. Most physics engines nowadays use semi-implicit Euler, though you can also find some using RK4 (4th-order Runge-Kutta) or Verlet integration.

As you reduce the timestep length, the thing you should expect to happen is for the simulation to converge (approach the ground truth). As you increase the timestep, results will diverge. The actual sign of the positional error doesn’t really matter and has nothing to do with floating point inaccuracy.

Note that it is typically undesirable for numerical simulations to diverge/overshoot, as this means error can accumulate to the point of overwhelming any meaningful data, making the simulation unstable.
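As a rough illustration of divergence versus boundedness (my own sketch, using a simple harmonic oscillator x'' = -x rather than a full physics engine): explicit Euler pumps energy into the system every step and blows up, while semi-implicit Euler keeps the energy bounded:

```python
def explicit_euler(x, v, dt, steps):
    """x'' = -x with explicit Euler: both updates read the OLD state."""
    for _ in range(steps):
        x, v = x + v * dt, v - x * dt
    return x, v

def semi_implicit_euler(x, v, dt, steps):
    """Same oscillator, but position uses the freshly updated velocity."""
    for _ in range(steps):
        v -= x * dt
        x += v * dt
    return x, v

energy = lambda x, v: 0.5 * (x * x + v * v)  # true value is 0.5 for x0=1, v0=0

print(energy(*explicit_euler(1.0, 0.0, 0.1, 1000)))       # diverges, >> 0.5
print(energy(*semi_implicit_euler(1.0, 0.0, 0.1, 1000)))  # stays near 0.5
```

This bounded-energy behaviour is one big reason physics engines favour the semi-implicit variant.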

It’s worth posting this great article that covers some of that in a nice visual way: https://gafferongames.com/post/integration_basics/


I think you’re confusing two completely unrelated concepts here. When doing this kind of measurement on a physics simulation, you’re not just measuring error due to floating point representation inaccuracy, you’re also measuring numerical integration error.

As you stray away from the origin of a scene, coordinate numbers get larger. This requires more bits to represent the part of the number before the point, but leaves fewer bits to represent the part after the point (that’s why it’s called floating point). As a result, you sacrifice accuracy in order to be able to represent larger values. This is why the concept of a floating coordinate origin exists: to offset all coordinates in the scene so we can regain bits for the part after the point.
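A small plain-Python demonstration of that precision loss (illustrative; `f32` just round-trips a value through the 32-bit floats engines typically use for positions):

```python
import struct

def f32(x):
    """Round a Python float to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, a 1 cm move is representable:
print(f32(1.0 + 0.01) - f32(1.0))                   # ~0.01

# A million units out, the same 1 cm move vanishes entirely
# (float32 spacing at 1e6 is 0.0625 units):
print(f32(1_000_000.0 + 0.01) - f32(1_000_000.0))   # 0.0

# A "floating origin" recenters coordinates so precision is recovered:
offset = 1_000_000.0
local = f32((1_000_000.0 + 0.01) - offset)          # recentred before rounding
print(local)                                         # ~0.01 again
```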

Completely orthogonal to this, you have error due to numerical integration. For physics simulations we treat time as a discrete quantity, and the dynamic properties of objects (velocity, acceleration, etc.) are only allowed to change between timesteps. This introduces an error with respect to the expected results on top of the floating-point accuracy error, typically a lot larger than it, and regardless of distance to the origin.

So if you’re graphing positional error in a simulation over distance to the origin, you’re conflating both sources of error: near the origin the error will have mostly the same sign (because a higher % of it is due to integration) and far away from the origin the error will begin to sway from positive to negative (because floating point inaccuracy takes over, and it may under or over represent expected values).


Yes, after reading that article posted by @MelvMay, I can see what people are talking about.
The integration issues are a different thing from what I normally consider with games/simulation maths error.
My focus with respect to error is on minimising the positional jitter that comes from a combination of floating-point parameter magnitude and error propagation.

Yep: that’s what my thesis states.

ok, thanks to you and @MelvMay for explaining.

I did not graph integrations over time - my bad for using the word “integration”. My graphs were of the numerical errors from floating-point position (and calculation error propagation) with respect to distance from the origin.

Nice discussion, thank you all.

So would it be correct to say the first answer to my question is:
Unity chose a fixed-time loop for physics motion in order to minimise integration error in the positional motion equations, such as:
position += velocity * dt


More or less: not to minimize error in an absolute, MSE sense, but to minimize error variance over time.

It’s fine if the position of an object is consistently slightly away from where it should be, but having it jitter around its expected trajectory (even if the average error is smaller) can be very visually jarring.


More likely it was to maintain a level of determinism, or perceived simulation consistency, from the end user’s POV.

The cost is the complexity of it not always being in-sync with the visuals and having to mess around with interpolation etc. Nowadays you can select this as a drop-down option so it’s super easy.


To add: the main problem you encounter when simulating per-frame is that when the frame rate tanks, the simulation becomes an absolute mess due to huge timesteps. In 2D we now have sub-stepping to deal with this.
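The sub-stepping idea can be sketched in plain Python like this (illustrative names and limits, not the actual engine API): split an oversized frame delta into equal sub-steps no longer than some maximum:

```python
import math

def step_with_substepping(advance, state, frame_dt, max_dt=0.02):
    """Advance the simulation by frame_dt using sub-steps no longer than max_dt.

    `advance` is any per-step integrator: advance(state, dt) -> state.
    """
    n = max(1, math.ceil(frame_dt / max_dt))
    h = frame_dt / n                      # all sub-steps equal, all <= max_dt
    for _ in range(n):
        state = advance(state, h)
    return state

# A hypothetical 0.25 s frame spike becomes 13 sub-steps of ~0.019 s
# instead of one huge, unstable 0.25 s step:
def fall(state, dt):                      # state = (position, velocity)
    p, v = state
    v += -9.81 * dt
    return (p + v * dt, v)

print(step_with_substepping(fall, (0.0, 0.0), 0.25))
```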

Great, I will write that down 🙂
So, next question: is the determinism improved by the choice of interpolation versus discrete?

Nope, interpolation works as a kind of post-process over physics state, only for rendering purposes. Interpolated data is not fed back to the next timestep, so it does not affect simulation behavior or determinism.

It only affects the perceived smoothness of motion, since you interpolate the state of objects between the last two timesteps to get a position for the current frame. When not using interpolation (discrete), you just render the state of the most recent timestep, which leads to a stop-motion effect if your render frequency (framerate) is higher than your simulation frequency (1/timestep).
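The interpolation itself is just a lerp between the two most recent physics states, something like this illustrative Python sketch (in a real loop, alpha would come from the leftover time in the fixed-step accumulator):

```python
def render_position(prev, curr, alpha):
    """Blend the last two physics states for rendering; alpha in [0, 1)."""
    return prev + (curr - prev) * alpha

# alpha = accumulator / FIXED_DT in a real loop: how far render time has
# advanced into the not-yet-simulated next physics step (names illustrative).
prev_pos, curr_pos = 1.0, 2.0    # positions from the last two fixed steps
print(render_position(prev_pos, curr_pos, 0.25))  # 1.25: a quarter of the way
```

Note the rendered position always lags slightly behind the freshest physics state, which is the small price paid for smoothness.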

This article explains it way better than I do (see “the final touch” at the very end where it implements interpolation): https://gafferongames.com/post/fix_your_timestep/


@MelvMay One thing that should be noted about that article is that a small number of samples (say, 10) is not normally considered useful for a result of scientific significance. However, I would say that the harmonic trend graphs are more clearly significant.