I don’t know how your code is set up, but one gotcha with the transport package is that it only performs sends in the jobs scheduled by ScheduleUpdate or ScheduleFlushSend. That is, calling EndSend doesn’t actually get anything on the wire; it just queues the packet for sending, and the actual socket operations are performed in the job.
So assuming your code basically does something like this every frame:
1. Schedule an update of the driver.
2. Process events and send messages.
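Concretely, that frame would look roughly like this (a minimal sketch with illustrative names, assuming the driver and connection are already set up):

// Unity.Networking.Transport types assumed in scope.
// 1. Schedule an update of the driver (completed right away here).
driver.ScheduleUpdate(default).Complete();

// 2. Process events...
NetworkEvent.Type ev;
while ((ev = connection.PopEvent(driver, out var reader)) != NetworkEvent.Type.Empty)
{
    // handle Connect / Data / Disconnect events
}

// ...and send messages. EndSend only queues the packet; it won't touch
// the socket until the next scheduled update or flush job runs.
if (driver.BeginSend(connection, out var writer) == 0) // 0 means success
{
    writer.WriteUInt(42); // illustrative payload
    driver.EndSend(writer);
}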
Once you send your ping, it will only actually go out on the next frame (up to one frame of delay). The server then receives it at the beginning of its next frame: up to another frame of delay. It processes the ping and sends its response, but again the actual send only occurs on the next frame: up to another frame of delay. Finally, the client receives the response at the beginning of its next frame, incurring up to another frame of delay. That’s up to 4 frames (if you’re unlucky) between the initial send (the call to EndSend) and receiving the response.
To improve this, you could schedule a send job with ScheduleFlushSend after you’ve processed events and sent messages. This should get your messages on the wire faster and improve latency. For example, Netcode for Entities will only schedule a single update job per frame, but will schedule send jobs at multiple points during a frame. The send job has been written to be relatively lightweight to allow these kinds of uses.
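Concretely, with illustrative names and ProcessEventsAndSendMessages standing in for your existing logic, the adjusted frame would look something like:

driver.ScheduleUpdate(default).Complete();

ProcessEventsAndSendMessages(); // your event handling and BeginSend/EndSend calls

// Flush the packets queued by EndSend onto the wire this frame, instead of
// waiting for the next frame's ScheduleUpdate job.
driver.ScheduleFlushSend(default).Complete();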
Thanks for the response, Simon. It’s quite insightful.
I’m already scheduling an update of the driver at the end of my system’s OnUpdate method (I don’t use a job since OnUpdate already uses the [BurstCompile] attribute). I’m also using a NativeQueue to queue the outgoing messages. It looks like this for my client:
[BurstCompile]
public void OnUpdate(ref SystemState state)
{
    ref var driver = ref networkDriver.Data;
    ref var connection = ref clientConnection.Data;

    if (!driver.IsCreated || !connection.IsCreated)
    {
        return;
    }

    // Ping request
    if (Time.realtimeSinceStartup - lastPingRequestTime > PingInterval)
    {
        if (TrySendPingRequest(ref state))
        {
            lastPingRequestTime = Time.realtimeSinceStartup;
        }
    }

    // Send messages
    while (outgoingMessages.Data.TryDequeue(out OutgoingMessage msg))
    {
        switch (msg.Channel)
        {
            case NetworkChannel.Unreliable:
            {
                driver.BeginSend(connection, out var writer);
                writer.WriteBytes(msg.Payload);
                driver.EndSend(writer);
            }
            break;
            case NetworkChannel.ReliableSequenced:
            {
                driver.BeginSend(reliablePipeline, connection, out var writer);
                writer.WriteBytes(msg.Payload);
                driver.EndSend(writer);
            }
            break;
            default:
                break;
        }
        msg.Dispose();
    }

    // Read messages
    clientJobHandle = driver.ScheduleUpdate();
    clientJobHandle.Complete();
}
The server code is quite similar; it just handles multiple NetworkConnections.
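The multi-connection part is roughly the standard pattern from the transport samples (simplified here, with connections being a NativeList<NetworkConnection>):

// Accept any pending connections.
NetworkConnection c;
while ((c = driver.Accept()) != default(NetworkConnection))
{
    connections.Add(c);
}

// Pop events per connection instead of for a single one.
for (int i = 0; i < connections.Length; i++)
{
    NetworkEvent.Type ev;
    while ((ev = driver.PopEventForConnection(connections[i], out var reader)) != NetworkEvent.Type.Empty)
    {
        // handle Data / Disconnect for this connection
    }
}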
From what you said, all the messages should be flushed by the time clientJobHandle completes. Knowing this, I would not expect the kind of latency I’m experiencing; it should be at most 2 frames if I’m unlucky, as you said.
I didn’t take the time to really dig into the Netcode for Entities samples though. Maybe I should start looking at how things are set up there.
Edit: As I read my reply, I noticed that I’m actually sending the outgoing messages before handling the received ones, which is not how I designed it. I checked my server code and the sending is done after. This doesn’t impact the ping time, since my client isn’t responding to anything for this specific interaction, but it does affect the responsiveness of my client for other actions.
When is the processing of incoming packets occurring here? If it’s done right before the ScheduleUpdate call, then there’s going to be at least a full frame of delay between a packet being received and it being processed.
The reason for this is that receives are similar to sends: we only touch the socket in a job. For receives, however, that only happens in the ScheduleUpdate job. That job basically pulls from the socket and puts the packets in a queue, and data events are then drawn from that queue. So assuming the processing of received packets happens right before the ScheduleUpdate call, here’s what’s going to happen:
1. A packet is received while (say) the outgoing message queue is being checked.
2. The code that processes new events does not see that packet, since we haven’t pulled from the socket yet.
3. The ScheduleUpdate job executes, pulls the received packet from the socket, and puts it in a queue.
4. In the next frame, the data event is popped from the driver and the packet is finally processed.
So basically there’s an extra full frame of delay added to the receive direction.
Ideally, data events would be processed immediately after the ScheduleUpdate job completes, to reduce latency. Then, once events are processed (possibly queuing new packets to send), a send job would be scheduled to get those responses out immediately. And if the processing of events is jobified, the whole thing can be scheduled as a chain of jobs, moving all of the network processing off the main thread.
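Putting that together, a sketch of the ideal frame (illustrative names; ProcessEventsJob is a hypothetical IJob for the jobified variant):

// Pull received packets from the socket.
driver.ScheduleUpdate(default).Complete();

// Process data events immediately, queuing any responses with BeginSend/EndSend.
NetworkEvent.Type ev;
while ((ev = connection.PopEvent(driver, out var reader)) != NetworkEvent.Type.Empty)
{
    // e.g. answer a ping right here
}

// Push the queued responses onto the wire in the same frame.
driver.ScheduleFlushSend(default).Complete();

// Jobified variant: schedule the same steps as a chain instead, keeping
// network processing off the main thread:
// var handle = driver.ScheduleUpdate(default);
// handle = new ProcessEventsJob { /* driver, connection, ... */ }.Schedule(handle);
// handle = driver.ScheduleFlushSend(handle);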