Network messaging latencies very high?

Hi,

I’ve been trying to create a lock-step mechanism and ran into some issues: if I don’t delay my inputs by at least 6-7 frames, the simulation permanently falls behind.

I looked at the NetworkManager’s server & local time, and the local time seems reasonably accurate (within a frame’s precision, at least locally). However, at a network tick rate of 60 Hz (and a game update rate of ~70 fps on both machines), the NetworkManager’s server tick on the two clients is usually off by multiple frames. Below are typical frame time samples around tick #2000 (server & local). System time is measured with QueryPerformanceCounter.

--- Server:
NetworkManager Time: Server: '2000' Local: '2000'
System Time: 1570964.1165525

--- Client:
NetworkManager Time: Server: '1994' Local: '2000'
System Time: 1570964.1082486

NetworkManager Time: Server: '2000' Local: '2005'
System Time: 1570964.199837

NetworkManager Time: Server: '2000' Local: '2006'
System Time: 1570964.2104752
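
For reference, a minimal sketch of the kind of logging that produces samples like these, assuming NGO’s `NetworkManager.ServerTime` / `LocalTime` and using `Stopwatch` as a portable stand-in for QueryPerformanceCounter (which it wraps on Windows):

```csharp
using System.Diagnostics;
using Unity.Netcode;

// Sketch: log server vs. local tick on every network tick, with a
// high-resolution timestamp. TickLogger is an illustrative name.
public class TickLogger : NetworkBehaviour
{
    static readonly Stopwatch Clock = Stopwatch.StartNew();

    public override void OnNetworkSpawn()
    {
        NetworkManager.NetworkTickSystem.Tick += OnTick;
        base.OnNetworkSpawn();
    }

    public override void OnNetworkDespawn()
    {
        NetworkManager.NetworkTickSystem.Tick -= OnTick;
        base.OnNetworkDespawn();
    }

    void OnTick()
    {
        UnityEngine.Debug.Log(
            $"NetworkManager Time: Server: '{NetworkManager.ServerTime.Tick}' " +
            $"Local: '{NetworkManager.LocalTime.Tick}' " +
            $"System Time: {Clock.Elapsed.TotalSeconds}");
    }
}
```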

I have a few observations:

  • Communication passes via a local network in this case. Ping tests to the outside are well below a frame length (around 5ms).
  • It seems to me the synchronization itself works well, but the network layer is adding significant latency.
  • UnityTransport.Update() triggers send & receive communication with the NetworkDriver class, and fires the OnTransportEvent that the NetworkManager consumes. It’s possible to trigger it manually via SendMessage("Update").
  • The network tick is triggered in PreUpdate(), so there’s already a frame of delay here by default w.r.t. UnityTransport, at least for the receive path.
  • It’s not clear to me where the communication with the OS layer happens, so possibly we lose another frame there. I can’t see further than Binding.Baselib_RegisteredNetwork_Socket_UDP_ScheduleRecv and friends. I would expect either the communication with the OS to happen during these calls, or the receive to happen near the start of the frame and the send near the end of it (though I’m not any kind of networking expert).

It looks like we have a 6-frame latency for no good reason, which is a wasteful starting point. This is problematic for reactive games (fighting games, etc.) such as ours.

Did anyone else dig into this topic? Other experiences? What did you find? Are there solutions? Did I make a mistake?

Hi, I had this problem too. The solution I found was to change the value of LocalBufferSec (simply put, it makes the client run a number of ticks ahead of or behind the server). The problem arises because NGO synchronizes client and server time: if a message leaves the server at tick 20, it is expected to reach the client at tick 20 (as long as you use LocalTime and ServerTime correctly when sending messages). The catch is that any network delay will cause the message to arrive late.

The solution is to change the value of LocalBufferSec. Just setting a fixed value can do the trick temporarily while developing, but it will not work for every client, since everyone has a different and changing latency. So my approach was a script that adapts the value per client as network conditions change.

I made a temporary script while developing, but for your final game I would recommend something a bit more complex.

using UnityEngine;
using Unity.Netcode;

public class TicksTest : NetworkBehaviour
{
    int CurrentTick = 0;

    public override void OnNetworkSpawn()
    {
        // The server tracks its own tick; the owning client reports its tick.
        if (IsServer || IsOwner)
        {
            NetworkManager.NetworkTickSystem.Tick += Tick;
        }
        base.OnNetworkSpawn();
    }

    public override void OnNetworkDespawn()
    {
        NetworkManager.NetworkTickSystem.Tick -= Tick;
        base.OnNetworkDespawn();
    }

    private void Tick()
    {
        CurrentTick = NetworkManager.LocalTime.Tick;

        if (IsClient) SendTickServerRpc(CurrentTick);
    }

    [ServerRpc]
    private void SendTickServerRpc(int tick, ServerRpcParams rpcParams = default)
    {
        // Only adjust the buffer of the client that sent this report.
        var target = new ClientRpcParams
        {
            Send = new ClientRpcSendParams
            {
                TargetClientIds = new[] { rpcParams.Receive.SenderClientId }
            }
        };

        if (tick < CurrentTick)
        {
            // Client is running behind the server: add buffer.
            AdaptClientBufferClientRpc(true, target);
        }
        else if (tick > CurrentTick + 15 /* tolerance */)
        {
            // Client is too far ahead: remove buffer.
            AdaptClientBufferClientRpc(false, target);
        }
    }

    [ClientRpc]
    private void AdaptClientBufferClientRpc(bool add, ClientRpcParams rpcParams = default)
    {
        if (add)
        {
            print("Buffer time increased");
            NetworkManager.NetworkTimeSystem.LocalBufferSec += 0.01;
        }
        else
        {
            print("Buffer time decreased");
            NetworkManager.NetworkTimeSystem.LocalBufferSec -= 0.01;
        }
    }
}

Where it says “tolerance”, set the value you consider appropriate for your game. It determines how many ticks ahead of the server the client is allowed to be.


This hadn’t crossed my mind again after I’d seen the variable; good catch. I understand how that can be a problem.

Unfortunately, even setting it to 0 does not noticeably change the results for me. I assume the RPCs may be arriving a little faster, but a frame delay of 4 still causes me to fall behind immediately.

A value of 0 means no buffering; you need to increase it. Just add a line in OnNetworkSpawn (on the client only, not the server) setting the buffer to a positive amount, and the client messages should arrive much later.

if (IsClient) NetworkManager.NetworkTimeSystem.LocalBufferSec += 0.5;
Adding that to OnNetworkSpawn in your player script would basically make the client run 0.5 seconds ahead of the server. Try that.


I don’t want them to arrive later, though, I want them to arrive as soon as possible.

I looked into using custom messages as well instead of RPCs (which are tick-bound if I understand correctly), but they suffer from a similar issue. For example, in detail:

  • I send a message early in the update; it gets put in the MessagingSystem’s send queue.
  • The transport updates in the middle of the frame. This does not send the message, because it hasn’t been handed to the transport yet.
  • Then, in OnNetworkPostLateUpdate, the MessagingSystem processes the send queue and forwards the data to the transport layer.

Thus, at least a full frame passes before the message is even processed by the transport layer. The same happens for the incoming messages, which are processed almost a full frame after they’ve actually been received.

  • Early update: the messaging system processes the incoming message queue.
  • Transport layer update → adds incoming messages to the messaging system’s queue.
I considered using the transport directly, but there’s no clean way to make the netcode messaging system ignore a message (and that’s assuming I expose some internals to do things like mapping network manager client ids to transport client ids).

I switched to using custom messages and forcing the send/receive to happen when I want them to. I hacked my way into Unity.Netcode to make some of it happen, and the results are significantly better.

Code

// This code works with UnityTransport because it updates its network
// driver in Update(); other transports may not behave the same. It also
// relies on internal Unity.Netcode members (MessagingSystem), which I
// exposed by hacking into the package.
public static class NetcodeUpdateExtensions
{
    public static void Exposed_ProcessIncomingMessageQueue(this NetworkManager mgr)
    {
        // Force the transport to pump receives now...
        mgr.NetworkConfig.NetworkTransport.SendMessage("Update");
        // ...then drain the freshly filled incoming queue.
        mgr.MessagingSystem.ProcessIncomingMessageQueue();
    }

    public static void Exposed_ProcessSendQueues(this NetworkManager mgr)
    {
        // Flush queued messages to the transport...
        mgr.MessagingSystem.ProcessSendQueues();
        // ...then force the transport to put them on the wire.
        mgr.NetworkConfig.NetworkTransport.SendMessage("Update");
    }
}

On a local network, I get messaging latencies varying from ~3ms to ~16ms (receive timestamp minus send timestamp). This seems reasonable, considering that’s about the length of a frame. From the results, I deduce that data reception actually happens in UnityTransport.Update, because I sometimes receive the message in the EarlyUpdate and sometimes in my custom update. I can only assume the send side behaves similarly.

Perhaps it would be a good addition to the Netcode API to facilitate this kind of usage without having to expose internal functions (and perhaps make it work for other transports somehow)?
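
For completeness, the way I drive these from my lockstep loop looks roughly like this (the Exposed_* methods are my own hacks from above, not NGO API; script execution order should place this where you want the send/receive to happen within the frame):

```csharp
using Unity.Netcode;
using UnityEngine;

// Sketch of a manually driven lockstep step, assuming the hacked-in
// Exposed_* extension methods shown earlier in the thread.
public class LockstepDriver : MonoBehaviour
{
    void Update()
    {
        var mgr = NetworkManager.Singleton;
        if (mgr == null || !mgr.IsListening) return;

        // Pull any freshly received messages out of the transport and queue.
        mgr.Exposed_ProcessIncomingMessageQueue();

        // ... sample local input, send it, advance the simulation ...

        // Push this frame's outgoing messages to the wire immediately.
        mgr.Exposed_ProcessSendQueues();
    }
}
```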