I spent today scratching my head over a timing issue, then realized that my Host (a cloned project instance) was probably running in the background at a much lower framerate, and I think that's skewing my testing quite a bit. I'm not very confident that's actually the cause, though.
For instance, I'm working on adding a touch of polish to throwing objects via a latency-hiding animation on the Client, and I couldn't for the life of me figure out why my client/server Time values were so far off. Here's an example:
This is with 40ms of network latency and 10ms of latency variability set in the NetworkManager, so a difference in that range is what I would totally expect to see here. Instead, these timings are regularly more than 250ms apart!
This is with NO artificial latency or latency variability:
Still >200ms difference.
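For anyone who wants to reproduce this, a throwaway logger along these lines is enough to see the gap (the class name and log format are placeholders, not my actual code):

```csharp
using Unity.Netcode;
using UnityEngine;

// Illustrative: logs the gap between LocalTime and ServerTime once per second.
public class TimeDeltaLogger : MonoBehaviour
{
    [SerializeField] private float logIntervalSec = 1f;
    private float _nextLogAt;

    private void Update()
    {
        var nm = NetworkManager.Singleton;
        if (nm == null || !nm.IsListening || Time.unscaledTime < _nextLogAt)
            return;

        _nextLogAt = Time.unscaledTime + logIntervalSec;

        double local = nm.LocalTime.Time;
        double server = nm.ServerTime.Time;
        Debug.Log($"LocalTime {local:F3}  ServerTime {server:F3}  diff {(local - server) * 1000.0:F0}ms");
    }
}
```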
The GREAT news for me is that I've built the whole project so far using these timings for synced abilities without realizing there's SO MUCH latency here, and everything still feels pretty good. Awesome.
Throwing, however, is the first really tricky case, where I'm doing shenanigans to hide latency by scaling the CLIENT animation speed to be slower than the SERVER's so things synchronize the way I need (throwing is actually a fairly complex case that can trigger a host of other events that I can't possibly predict/unwind accurately).
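Roughly, the idea looks like this (heavily simplified; the Animator state name, the base duration, and the scaling are placeholders, and the real throw flow is much messier):

```csharp
using Unity.Netcode;
using UnityEngine;

// Simplified illustration of the latency-hiding idea: stretch the client-side throw
// animation so it finishes roughly when the server's version does.
public class ThrowAnimationSync : NetworkBehaviour
{
    [SerializeField] private Animator animator;
    [SerializeField] private float baseThrowDuration = 0.4f; // seconds at speed 1 (placeholder)

    public void PlayThrow()
    {
        if (IsClient && !IsServer)
        {
            // How far this client's local timeline runs ahead of the server's.
            float aheadSec = (float)(NetworkManager.LocalTime.Time - NetworkManager.ServerTime.Time);

            // Slow the client animation so it lands in sync with the server result.
            // With the ~200ms gaps above, this ends up noticeably slower than intended.
            animator.speed = baseThrowDuration / (baseThrowDuration + Mathf.Max(0f, aheadSec));
        }

        animator.Play("Throw"); // placeholder state name
    }
}
```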
So, is what I’m seeing expected with the NetworkManager.ServerTime/LocalTime difference?
Is there a way to ensure a background cloned project is running at the same framerate as a focused process?
I haven't noticed any issues with synchronized projects when using ParrelSync. There was at least one user report about such an issue, but I think that user was running a build + editor and experienced lag because one of the two dropped fps while in the background.
Whichever setup you're using now (ParrelSync, Unity's Multiplay, or just two separate instances of the editor), try one of the others (ParrelSync or Multiplay) and check whether the issue goes away.
I also tend to believe there is a setting somewhere under Player settings, or Preferences/Project settings, that affects how the editor handles refreshing / framerate.
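If it is background throttling, the script-side equivalents are something like this (this should mirror the "Run In Background" checkbox under Project Settings > Player, if I remember right):

```csharp
using UnityEngine;

// Illustrative startup settings to keep an unfocused instance from being throttled.
public static class BackgroundFramerate
{
    [RuntimeInitializeOnLoadMethod]
    private static void Apply()
    {
        Application.runInBackground = true;   // keep running when the window loses focus
        QualitySettings.vSyncCount = 0;       // don't let vsync cap one of the instances
        Application.targetFrameRate = 60;     // pin both instances to the same framerate
    }
}
```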
Well, I grabbed it, and it looks like a newer version of what I'd been using (ProjectCloner). I'm still seeing 150+ms of difference between ServerTime and LocalTime. I tried tweaking the TickRate and other settings in the NetworkManager, to no avail.
I'll dig into the code a bit tomorrow and see if I can discover anything. My expectation of how this works is that LocalTime is based on the Client's ping to the server, using that delay to run that far “ahead” of the server. If that's the case, I'd expect ServerTime and LocalTime to be maybe one tick apart (~30ms), but who knows. Just a guess; we'll find out tomorrow.
I have, and I’m still seeing a really large difference:
Over 170ms! My network tick is 30Hz, but even bumping that up to 60 or 100Hz doesn't seem to help (though the difference between LocalTime and ServerTime falls to ~100ms at a 100Hz network tick).
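For reference, the tick rate I'm changing is just the Tick Rate value on the NetworkManager (exposed in the inspector, or via NetworkConfig.TickRate from code); at 30Hz a tick is ~33ms, so the gaps I'm seeing are several ticks wide regardless of the setting. A sketch of the code route:

```csharp
using Unity.Netcode;
using UnityEngine;

// Illustrative: one tick is 1000 / TickRate ms, so ~33ms at 30Hz, ~17ms at 60Hz, 10ms at 100Hz.
public class TickRateSetup : MonoBehaviour
{
    private void Awake()
    {
        // Must be applied before StartHost()/StartClient(); 100 is just one of the values I tried.
        if (NetworkManager.Singleton != null)
        {
            NetworkManager.Singleton.NetworkConfig.TickRate = 100;
        }
    }
}
```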
I've raised the Max Payload Size and Max Send Queue Size to be much higher than the defaults, but lowering them back down hasn't changed anything (not that I'd honestly expect it to).
I’ve got a few small bugs to fix today so I’m going to knock those out and then dig into this a bit… Maybe the large ServerTime and LocalTime difference I’m seeing is intended but it’s way outside what I would expect to see.
170ms for two builds running locally is indeed too high.
This might also occur due to software or driver interference though. If you can, do the latency test on a separate machine.
After a bit of digging, I think my problem is related to a massive latency spike when the client first connects that never gets corrected (or gets corrected extremely slowly over time). Here are the NetworkTime values and round trip times (RTT) for the first few frames after a client connects:
The first frame where the client's Player NetworkObjects get spawned and run shows a NetworkTime difference of >233ms, with RTT staying pretty high for a few frames and then dropping to something much more in line with expectations (~25ms).
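(For anyone reproducing this, a per-frame probe along these lines, with placeholder names, is enough to capture it:)

```csharp
using Unity.Netcode;
using UnityEngine;

// Illustrative: log NetworkTime values and transport RTT for the first frames after
// this client connects and spawns.
public class ConnectTimingProbe : NetworkBehaviour
{
    private int _framesLeft = -1;

    public override void OnNetworkSpawn()
    {
        if (IsClient && !IsServer)
            _framesLeft = 10; // log the first handful of frames after spawn
    }

    private void Update()
    {
        if (_framesLeft <= 0) return;
        _framesLeft--;

        double local = NetworkManager.LocalTime.Time;
        double server = NetworkManager.ServerTime.Time;
        ulong rttMs = NetworkManager.NetworkConfig.NetworkTransport.GetCurrentRtt(NetworkManager.ServerClientId);

        Debug.Log($"frame {Time.frameCount}: LocalTime {local:F3}, ServerTime {server:F3}, " +
                  $"diff {(local - server) * 1000.0:F0}ms, RTT {rttMs}ms");
    }
}
```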
There are a few strange things I’m seeing here:
When a client joins, there's roughly 120ms of frame time spent in the NetworkManager doing initialization work on that remote client.
Then it creates a NetworkPlayerController, which initializes a new Main Camera; that causes a single ~100ms frame.
So when a client connects, the NetworkTimeSystem is getting some huge ping times, which throws everything off.
I've tried calling NetworkTimeSystem.Reset() after a brief delay, but can't seem to get it to do anything. I've also tried changing AdjustmentRatio and HardResetThresholdSec on the remote client, with no apparent effect. The remote client is still ~200ms ahead of the server, where I'd expect maybe 20ms at most in my current testing setup.
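For the record, those tweaks look roughly like this (the values are just examples of what I tried; the exact Reset(...) signature seems to differ between Netcode versions, so I've left that part as a comment):

```csharp
using Unity.Netcode;

// Sketch of the post-connect tweaks I tried on the remote client; none of them
// noticeably changed the ~200ms offset in my tests.
public class TimeSystemTweaks : NetworkBehaviour
{
    public override void OnNetworkSpawn()
    {
        if (!IsClient || IsServer) return;

        var timeSystem = NetworkManager.NetworkTimeSystem;

        // Try to converge faster and to trigger a hard reset on smaller errors.
        // (Example values only.)
        timeSystem.AdjustmentRatio = 0.05;
        timeSystem.HardResetThresholdSec = 0.05;

        // I also tried a manual NetworkTimeSystem.Reset(...) after a short delay;
        // the argument list depends on the Netcode version, so it's omitted here.
    }
}
```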
I'll keep poking at this, but it's not something I can afford to spend a bunch of time on. Right now it's causing significant issues with synchronized ability usage, since there's almost 500ms of artificial latency at times. That means animation hiding to disguise network latency (which I'd like to use for several synchronized abilities) isn't viable, and it adds several hundred milliseconds to abilities that only take 250-500ms.
But I’ll dig a bit more and post what I find.
Edit: Quick note, this settles down to ~100ms after a few minutes, which is closer to what I would expect but still seems quite high for a round trip between two instances on the same machine.