This started from a thread about keeping networked clients in sync.
When is Time.deltaTime NOT "The completion time in seconds since the last frame"?
It was pointed out that Time.deltaTime is not the real time since the last frame if Time.timeScale != 1.
What happens to Time.deltaTime if long-running code is placed in the Update() loop?
What about when the system becomes busy with non-script delays, such as filesystem reads and writes, or other non-game processes using up cycles?
In my game, I am seeing the cumulative Time.deltaTime drift by 2-3 seconds over a 4-minute period. It seems to happen when the game needs to hit the hard drive to save some data.
Is there any in-depth write-up of how Time.deltaTime is derived on Windows-based systems, or more information on when Time.deltaTime does not equal the value stated in the documentation?
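For reference, this is roughly how I'm measuring the drift (a minimal sketch, not my exact project code; the class name is my own):

using UnityEngine;

// Accumulate Time.deltaTime and compare it against the real elapsed time
// from Time.realtimeSinceStartup to see how far the two drift apart.
public class DeltaTimeDriftTracker : MonoBehaviour
{
    private float accumulatedDelta;
    private float startRealtime;

    void Start()
    {
        startRealtime = Time.realtimeSinceStartup;
    }

    void Update()
    {
        accumulatedDelta += Time.deltaTime;
        float realElapsed = Time.realtimeSinceStartup - startRealtime;
        float drift = realElapsed - accumulatedDelta;

        // Only log once the drift becomes noticeable.
        if (Mathf.Abs(drift) > 1f)
        {
            Debug.Log("deltaTime sum: " + accumulatedDelta + "s, real time: " + realElapsed + "s, drift: " + drift + "s");
        }
    }
}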
Time.deltaTime will not change for the entirety of the execution of a single frame. That means that:
void Update() {
    Debug.Log("Time is " + Time.deltaTime);
    SomeFunctionThatRunsAndFreezesYourGameForTenMinutes();
    Debug.Log("Time is " + Time.deltaTime);
}

void LateUpdate() {
    Debug.Log("Time is " + Time.deltaTime);
}
That code will output the same number three times. This is by design: for a game's visuals to look correct, every object must be simulated as if it exists at the exact same point in time. Otherwise, two objects moving in parallel could easily get out of sync with each other if a third object doing heavy processing happens to run between them.
What happens on the frame after that, however…
I think there is a maximum Time.deltaTime value. I don’t know what it is offhand (and I agree this should be marked in the documentation), but if you have any frames that take a particularly long time, they could exceed that and you’ll lose a bit of time.
You can try tracking the frame-to-frame difference in Time.time and comparing it to Time.deltaTime; when they are not equal (or not within some epsilon), output a log. If you're so inclined, you could use this data to confirm your assessment that you're losing time on the deltaTime-based tracker, and report a documentation bug (via the Bug Reporter) with the information you find.
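Something along those lines, as a sketch (the class name is mine; I compare against the real frame gap from Time.realtimeSinceStartup rather than Time.time, since Time.time is advanced by the same clamped delta and would never show a difference):

using UnityEngine;

// Logs any frame where Time.deltaTime disagrees noticeably with the
// real time that actually elapsed since the previous frame.
public class FrameTimeMismatchLogger : MonoBehaviour
{
    private float lastRealtime;

    void Start()
    {
        lastRealtime = Time.realtimeSinceStartup;
    }

    void Update()
    {
        float realGap = Time.realtimeSinceStartup - lastRealtime;
        lastRealtime = Time.realtimeSinceStartup;

        // Epsilon ignores ordinary jitter between the two clocks.
        if (Mathf.Abs(realGap - Time.deltaTime) > 0.01f)
        {
            Debug.Log("Frame took " + realGap + "s of real time, but Time.deltaTime is " + Time.deltaTime);
        }
    }
}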
There's one other side note: if you do find yourself with frames longer than a fraction of a second, you should consider either spreading the work across multiple frames or offloading the time-intensive work onto a separate thread.
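For the "spread it across frames" option, a generic coroutine sketch (the names and numbers are placeholders, not tied to your save code):

using System.Collections;
using UnityEngine;

// Spreads a long loop across many frames with a coroutine instead of
// doing all of it inside a single Update() call.
public class SpreadWorkExample : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(DoHeavyWork());
    }

    IEnumerator DoHeavyWork()
    {
        const int itemsPerFrame = 100; // tune so each slice stays well under a frame

        for (int i = 0; i < 10000; i++)
        {
            ProcessItem(i);

            // Yield periodically so the frame can finish and deltaTime stays small.
            if (i % itemsPerFrame == itemsPerFrame - 1)
                yield return null;
        }
    }

    void ProcessItem(int i)
    {
        // Placeholder for the expensive per-item work.
    }
}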
@OP
In other words: the engine pretends the frame only took Time.maximumDeltaTime, even if it actually took much longer. This ensures that physics has a chance to catch up in an acceptable way. Without it, a heavy physics update run multiple times to catch up several seconds could make the scenario even worse, as it would again cause high (or even higher) deltas.
If you do heavy stuff like file IO, you might quickly run into this situation. Time.maximumDeltaTime defaults to 1/3 of a second (in 2017.xx). That is, if you hang for 3 seconds, you'll lose roughly 2.67 seconds of "real time", whereas the engine behaves as if this did not happen and continues with its own timer as if the frame only took maxDeltaTime.
There are surely ways to compensate for it, but IO is often better done asynchronously, as @StarManta already recommended.
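If you want to see the clamp in action, a small sketch like this logs the frames where it kicks in (the class name is mine; assumes default settings, i.e. Time.timeScale of 1):

using UnityEngine;

// Prints the current Time.maximumDeltaTime and flags frames where
// Time.deltaTime appears to have been clamped to it.
public class MaxDeltaTimeWatcher : MonoBehaviour
{
    void Start()
    {
        // Defaults to roughly 1/3 s; it can be raised if losing real time
        // matters more than the extra physics catch-up work.
        Debug.Log("Time.maximumDeltaTime is " + Time.maximumDeltaTime);
    }

    void Update()
    {
        if (Time.deltaTime >= Time.maximumDeltaTime)
        {
            // The frame really took longer than this; the difference is the
            // "lost" game time (a 3 s stall with a 1/3 s cap loses ~2.67 s).
            Debug.Log("deltaTime was clamped to " + Time.deltaTime);
        }
    }
}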
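For example, assuming the .NET 4.x scripting runtime (so System.Threading.Tasks is available), a save write can be pushed off the main thread with something like this sketch (path and payload are placeholders):

using System.IO;
using System.Threading.Tasks;
using UnityEngine;

// Writes the save data on a worker thread so the main thread never
// stalls on disk IO.
public class AsyncSaveExample : MonoBehaviour
{
    public void Save(string json)
    {
        // Unity APIs like Application.persistentDataPath must be called on
        // the main thread, so resolve the path before handing off.
        string path = Path.Combine(Application.persistentDataPath, "save.json");

        Task.Run(() => File.WriteAllText(path, json));
    }
}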
I was completely unaware of Time.maximumDeltaTime, but that sounds exactly like what I am encountering. I'll test this theory out with StarManta's suggestion. As Time.timeScale is documented alongside Time.deltaTime, I think this should be too. I'll look at doing the IO asynchronously, but multi-threaded stuff gets intractable for me pretty quickly.