Minor Questions regarding the FPS-Sample Video

Hi, @petera_unity and others;

Unfortunately, I was not able to attend Unite so I rely heavily on video material.

In the sample video, around the 40-minute mark, there is the suggestion to run 58/62 ticks instead of 60, and I have some questions about that.

1) Can someone elaborate on the phrase "predicting two extra frames down to zero" when using 60 Hz?
2) My guess is that the suggestion concerns the simulation frame rate. Does that mean the client also uses 58/62 Hz for the simulation instead of 60? What about the Hz for sending UserCommands and Snapshots?
3) If only the server simulation rate is changed, what is used as the delta time? Is it a fixed one? Is it always a variable one? Is it still 16.6 ms, or is it 17.2 ms / 16.1 ms?
4) If it's a variable delta time, why? Doesn't that make the simulation far more complex?

Thanks in advance,


This is a great question. I was writing up an answer, but now I am actually not sure if my understanding is correct! I'll look into it a bit more and get back with a proper writeup.


Thanks Peter, looking forward to your answer!

Hey, I don't want to push or anything, because I know that @petera_unity probably has enough on his plate already.
But if anyone has strong opinions on those questions and might be able to provide an answer, I'd be very happy to have an ongoing discussion.


@poettlr with networking and what Unity is building/has provided, you are given full control over the network stack. I mean, you are the client and the server, and you compile the code. So you can get into the scripts, pull constants out and make them variables, and change things Unity sets by default.

You are asking a bit about the over/under Hz. As explained in the video, they had issues with a 60 Hz refresh rate and a 60 Hz update rate: the two were never quite in sync (off by fractions of a frame), which resulted in waiting on EVERY update. If you vary the update rate by one full tick either way, the updates can run more smoothly without fractional-frame stalls.
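To make the "off by fractions of a frame" point concrete, here is a minimal sketch (my own illustration in Python, not FPS-Sample code), assuming the server clock runs 0.05% slow relative to the client's: at exactly 60 Hz some render frames end up with no fresh snapshot, while 62 Hz always delivers at least one.

```python
# Hypothetical sketch (not FPS-Sample code): the client renders at 60 Hz;
# the server produces snapshots at `server_hz`, but its clock runs a
# fraction slow relative to the client (drift > 1), as real clocks do.

def snapshots_per_render_frame(server_hz, drift=1.0005, seconds=10.0, render_hz=60):
    """Count the fresh server snapshots available at each render frame."""
    frames = int(seconds * render_hz)
    counts = []
    consumed = 0
    for f in range(1, frames + 1):
        now = f / render_hz  # client-perceived time of this render frame
        produced = int(now * server_hz / drift)  # snapshots sent so far
        counts.append(produced - consumed)
        consumed = produced
    return counts

for hz in (60, 62):
    starved = sum(1 for c in snapshots_per_render_frame(hz) if c == 0)
    print(f"{hz} Hz server: {starved} render frames with no fresh snapshot")
```

At exactly 60 Hz, the tiny drift means the client periodically waits a whole frame for data; ticking over (62) guarantees a surplus every frame, and ticking under (58) guarantees a steady deficit the interpolation buffer can absorb. That is one reading of the video's 58/62 suggestion.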

But you can change anything you want. You can increase the network update rates of the clients and the server, which would reduce your between-frame ms. But can the server keep up? It's a balance between what everything can do within the time it is allowed. Increasing update rates normally costs you more bandwidth/CPU time, or you need to reduce workloads (data) to keep the end result the same.

You can even kill off prediction if you like, but it's a great direction Unity is heading in, together with the deduplication/compression work.


I'm totally aware that I can change everything. I just have not yet heard about the approach of running different Hz on client and server, at least not on the simulation side. Sending and receiving can, of course, be done at a different rate.
That small detail is missing from the video, and as far as my understanding goes, using different simulation Hz will result in a different outcome at a different time.
Think of a block that moves to the right for one second:
C - tick rate 60 Hz - dt = 1/60 s ≈ 16.67 ms - moves with vel (2, 0);
Results in a position change of ≈ 0.0333 every tick.
C0 = Pos (0, 0);
C60 = Pos (2.0, 0);

If the server ticks at a different frequency and wants to do the same, we get a different sequence of states:
S - tick rate 58 Hz - dt = 1/58 s ≈ 17.24 ms - moves with vel (2, 0);
Results in a position change of ≈ 0.0345 every tick.
S0 = Pos (0, 0);
S58 = Pos (2.0, 0);

Both end up at (2.0, 0) after a full second, but tick N of the server corresponds to a different point in time than tick N of the client (e.g. client tick 30 is at x ≈ 1.0, while server tick 30 is already at x ≈ 1.034). So not only do the two simulations disagree tick for tick, there is also no common frame of reference to reconcile against.
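A minimal sketch of the example above (same velocity (2, 0), explicit fixed-step integration) shows the tick-for-tick mismatch:

```python
# Sketch of the block example: the same constant-velocity move stepped
# at 60 Hz vs 58 Hz. Both land on x = 2.0 after a full second, but
# comparing the simulations tick for tick (as reconciliation does)
# gives different positions, because each tick covers a different dt.

def simulate(tick_rate, velocity_x=2.0, seconds=1.0):
    dt = 1.0 / tick_rate          # fixed timestep for this simulation
    x = 0.0
    positions = [x]
    for _ in range(int(seconds * tick_rate)):
        x += velocity_x * dt      # explicit Euler step
        positions.append(x)
    return positions

client = simulate(60)   # dt ≈ 16.67 ms
server = simulate(58)   # dt ≈ 17.24 ms

print(f"client after 1 s: {client[-1]:.6f}")
print(f"server after 1 s: {server[-1]:.6f}")
print(f"client at tick 30: {client[30]:.4f}")
print(f"server at tick 30: {server[30]:.4f}")
```

The end states agree for constant velocity, but state-at-tick-N does not, so there is no shared tick index to diff against; with anything less trivial (acceleration, drag), even the end states would diverge.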

Please correct me if I'm wrong but that can't be what @petera_unity meant, can it?

The server sets the tick rate, and it sends updates down at that rate to the clients. The clients can predict between server updates at their own 'tick rate', but the server will correct any mispredictions, which appear as a 'roll back' on the clients to whatever the server ends up sending down for that update.

So in your example, if the client predicts incorrectly, you will see "snapping" or "de-sync" on the client end when the new network update/tick arrives and says "this is what is", and the client has to update to that new data. However, the client is sending its "move" commands at 60 updates per second to the server, so the server gets the same count of move commands in your example: 60 moves. The server then needs to limit how many move commands clients can send it per second, to prevent exploits.

When you think about it like this, as the video describes: the client sends 60 updates per second to the server; the server uses all 60 updates but only sends updates back to the client 15 times per second, to correct the client and make sure the client is predicting the same outcomes as the server. It is not 'reconciled'; the server is authoritative and simply states what happened, and the client must obey and correct model placements (a snap back).

To expand on your example: if you had a client update rate of 120 frames per second, that client's box would move 120 times, send the server only 60 client-move inputs, and receive 15 network updates (authoritative movement commands) back from the server every second. So the client's prediction for 120 frames of motion needs to be pretty close to the server's, to prevent the client from seeing small desyncs/snaps as the authoritative word-of-god network update corrects it. The faster the client is (fps/Hz), the worse the snaps are if prediction is off.

Luckily, with the FPS Sample, you are literally using the same code, so as long as all the physics are the same between the server and client environments (with all physics items being networked as well), predictions should match. Which ties into why the server's physics environment is authoritative: it is always 'correct' across all clients.

That way clients can have different fps/Hz locally based on their hardware, and hence send out different update counts per second as well (as they would be limited by fps at the client, maybe? Maybe ECS decouples the network code; I'm not sure). The server receives that client's 5 to 60 updates per second, plugs them into the server math, and then pushes the result back out to all clients 15 times a second without requesting any validation of receipt. So the 15 updates per second are the network pulse, the 'what we know for sure happened'; everything else locally is a prediction of the future state and will be 'corrected' on the next network update, client-side, by a snap to whatever that network update says.
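For what it's worth, the "predict, then snap to the authoritative state" loop described above is usually implemented with an input buffer and replay. A hypothetical sketch (standard technique, not actual FPS-Sample code; the 5 units/s speed and the tick numbers are made up):

```python
# Hypothetical sketch of client-side prediction with server correction.
# The client applies inputs immediately, keeps them in a buffer, and
# when an authoritative server state for tick T arrives, it snaps to
# that state and replays the buffered inputs newer than T.

DT = 1.0 / 60.0  # fixed simulation timestep, assumed shared by client and server

def step(x, move_input):
    """One simulation tick: move at 5 units/s in the input direction."""
    return x + 5.0 * move_input * DT

class PredictingClient:
    def __init__(self):
        self.x = 0.0
        self.tick = 0
        self.pending = []  # (tick, input) pairs not yet confirmed by the server

    def apply_input(self, move_input):
        self.pending.append((self.tick, move_input))
        self.x = step(self.x, move_input)   # predict immediately
        self.tick += 1

    def on_server_state(self, server_tick, server_x):
        self.x = server_x                   # snap to the authoritative state
        # Drop confirmed inputs and replay the unconfirmed ones on top.
        self.pending = [(t, i) for (t, i) in self.pending if t > server_tick]
        for _, move_input in self.pending:
            self.x = step(self.x, move_input)

client = PredictingClient()
for _ in range(10):
    client.apply_input(1.0)   # hold "right" for 10 predicted ticks

# The server confirms tick 5 but disagrees slightly (e.g. it saw a collision).
client.on_server_state(5, server_x=0.45)
print(f"corrected position: {client.x:.4f}")
```

The visible "snap" is exactly the difference between the predicted position and the corrected-and-replayed one; the better the prediction, the smaller that difference.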

Hope this helps, and there is a BIG network re-do in the works... I'd love to know an ETA on that.

Unfortunately, that's not what I'm asking. There is a difference between the render frame rate, the simulation frame rate, and even the network frame rate. In no way do I want to argue against what you are saying, because I don't have any experience with the FPS Sample other than watching the video.
From my understanding, and that's how I wrote my own multiplayer code, you want to predict as accurately as possible. A fixed time step for the simulation helps with that and also simplifies the logic. (It's basically what Tim Ford and Blizzard did with Overwatch; it's somewhat lock-step on steroids, though to be fair those could be two totally different underlying approaches.)

Of course, all that can be achieved by using the dt between frames on the client and just running the simulation as fast as possible. Doing that should result in far more mispredictions and snapping than a fixed timestep would, though, because you don't have an exact frame of reference. Also, I don't know of any major esports title other than CS:GO that is not using a fixed simulation frame rate; maybe someone has more info on that.
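The fixed-timestep approach I mean is the classic accumulator loop: render frames take however long they take, but the simulation only ever advances in exact 1/60 s steps, so every peer integrates with the same dt. A minimal sketch:

```python
# Sketch of a fixed-timestep loop with an accumulator: variable render
# frame times feed an accumulator, and the simulation runs zero or more
# exact 1/60 s ticks per render frame. The leftover fraction is usually
# used to interpolate rendering between the last two simulation states.

FIXED_DT = 1.0 / 60.0

def run(frame_times):
    """frame_times: wall-clock durations of successive render frames."""
    accumulator = 0.0
    sim_time = 0.0
    ticks = 0
    for frame_dt in frame_times:
        accumulator += frame_dt
        while accumulator >= FIXED_DT:   # run 0..n fixed ticks this frame
            sim_time += FIXED_DT         # simulate(FIXED_DT) would go here
            accumulator -= FIXED_DT
            ticks += 1
    return ticks, sim_time

# Irregular frame times (a 30 Hz hitch, some fast frames), repeated.
frames = [0.0333, 0.0166, 0.0081, 0.0250, 0.0166] * 12
ticks, sim_time = run(frames)
print(f"{ticks} fixed ticks, simulated {sim_time:.4f} s")
```

However jittery the render loop is, the simulated time is always an exact multiple of 1/60 s, which is what gives client and server a shared frame of reference to predict against.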

My overall guess is that I have to dig into the FPS Sample much more and compare it to my own multiplayer solution on a broader level.


PS: It's super early here, so if anything I wrote is unclear, let me know and I'll try to rephrase it or get some numbers to back it up. Also, thanks for the discussion; it seems there finally is someone who also has an understanding of multiplayer code. Let's hope we can all learn something new from further posts!
