Calculating latency in C#

Hi guys,

On my project, the server and client talk by sending TCP streams. I'm asking your OPINION: how would you calculate the latency between these two?

In the past I have used GetTickCount().

If you're creating a networking system where latency matters, then you should be using UDP, not TCP. You can probably find the right answer to your question on that site as well.

.NET has a Ping class:

The reply will contain information including the amount of time it took the message to complete its trip.

Note that this is round-trip latency, not point-to-point latency.
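Roughly, using it looks like this (the host name is just a placeholder; substitute your server's address):

using System;
using System.Net.NetworkInformation;

// "game.example.com" is a placeholder host
using (var ping = new Ping())
{
    PingReply reply = ping.Send("game.example.com");

    if (reply.Status == IPStatus.Success)
        Console.WriteLine($"Round trip: {reply.RoundtripTime} ms");  // round-trip time in milliseconds
    else
        Console.WriteLine($"Ping failed: {reply.Status}");
}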

Note that latency on TCP can pile up, though, due to congestion: packets queue up waiting for previous messages to be acknowledged. This is why Dave Carlile is suggesting UDP.

If you're forced to use TCP, and you're using sockets for instance, you can turn on the Nagle algorithm:

This sets up a buffer that waits for several small packets to pile up and sends them all at once, when circumstances call for it. This can speed things up under certain conditions.
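For what it's worth, in .NET the Nagle algorithm is controlled through the NoDelay property on Socket or TcpClient (it's enabled by default); a quick sketch:

using System.Net.Sockets;

var client = new TcpClient();

// NoDelay = false leaves the Nagle algorithm enabled (the default),
// so small writes are buffered and coalesced before being sent.
client.NoDelay = false;

// NoDelay = true disables Nagle, sending each write immediately
// at the cost of more, smaller packets on the wire.
// client.NoDelay = true;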


Hi Dave, and thank you for the reply. I'm using a mixed TCP/UDP protocol on my MMO project, which has a custom-made async server, and I'm running it with Nagle disabled so there is no packet buffering.

Hi Lord, I'm aware of the Ping class. I want to use my own packet class: when you use Ping against a target host, that only measures the connection between them, whereas when I create a packet, pass it to the stream, and process it on my client, I can see if something is going wrong between calls. I'm using a mixed async TCP/UDP server to handle clients.

I'm also aware of Nagle, and I've written a class to handle clumped packets; currently I'm running with Nagle disabled. I'm able to handle over 6000 clients concurrently while they all send 8 bytes of data 10 times per second.

I also looked into timer classes, and I'm going to use timeGetTime and timeBeginPeriod.
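In case it's useful to anyone, those are Windows multimedia timer functions from winmm.dll, so calling them from C# means P/Invoke declarations along these lines (a sketch, not production code):

using System;
using System.Runtime.InteropServices;

static class HighResTimer
{
    // Windows multimedia timer functions in winmm.dll
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();

    public static void Demo()
    {
        // request 1 ms timer resolution for the duration of the measurement
        timeBeginPeriod(1);

        uint start = timeGetTime();          // milliseconds since Windows started
        // ... do work / wait for a reply ...
        uint elapsedMs = timeGetTime() - start;

        Console.WriteLine($"Elapsed: {elapsedMs} ms");

        // always pair timeBeginPeriod with timeEndPeriod
        timeEndPeriod(1);
    }
}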

OK, so Ping doesn't measure what you're expecting.

What do you want to measure?

Because from what I read of your OP, it sounded like you wanted the latency between client and server. This is usually measured as the time it takes a packet to travel between client and server, which Ping can measure. If that's not what you want, what do you want?

You might be aware of this already and have compensated for it, but it's usually inadvisable to mix UDP and TCP: TCP's flow control algorithms measure packet loss to determine when to pull back on packet sends, UDP traffic tends to induce packet loss in TCP, and so it's a recipe for TCP congestion. A better solution might be to use an existing open-source networking library (e.g. Lidgren) that has implemented reliable delivery on top of UDP, so you get the "best" of both worlds.

TCP is used for crucial data like login, inventory, and such. UDP is used for less important things like position updates.
A mixed protocol is widely used in big MMOs; one example is WoW.

I'm doing fine with TCP and UDP. Don't ask me to change what I'm doing and stay on topic: what would you use to calculate latency?

Are you going to answer my last question?

Sorry Lord, I missed your post. The point of this thread is written in the first post:
"I'm asking your OPINION how you would calculate latency."

The packet that travels between client and server has more than one job; that's why I'm not using the Ping class. It's better performance-wise to use the current packet structure class than to introduce a new object for every client callback.

I hope this helps 🙂

Define latency then.

Because if the latency hinges on the "job", then that's not latency as I know it.

Latency in networking terms usually means the amount of time that a single packet takes to travel some designated path across the network.

You asked for the latency between client and server, that being your path. Ping will tell you the latency of a single packet's round trip between 2 nodes on a network.

What is a job if not more than a single packet? And if a job is more than a single packet, then what is it you mean by 'latency'?

Are you asking to track the amount of time it takes to transmit a complete stream of data?

I honestly have no idea what you mean by "job" or "latency".

Once you've cleared that up, I'd be more than happy to brainstorm with you on how I'd go about measuring said duration of time.

Basically it's a keep-alive packet, which includes the delay from server → client → back to server. When the server reaches a high population, processing slows down, which delays the reply and causes higher latency. If I ping the host directly, that skips the work cycle, as I'm only pinging the IP and port.

OK, so you want to know what the overhead is of the service actually receiving a request and being queued up to handle it.

If you have a Keep Alive message, I'm assuming your service has some 'KeepAlive' method that can be called, which immediately returns (maybe with a timestamp or something… I'll cover that later).

Recording the full round trip on the client side can be done with the System.Diagnostics.Stopwatch class:

var watch = new Stopwatch();           // System.Diagnostics.Stopwatch
watch.Start();
MyService.KeepAlive();                 // access service method
watch.Stop();
var roundTripLatency = watch.Elapsed;  // TimeSpan for the full round trip

This will give you an idea of the latency of the message reaching the service, the service being queued up, the KeepAlive message being processed and returned, and the time for the reply to make it back to the client.

You can do the same thing to measure any other service request as well, but I wouldn't exactly call that latency; it's just a measure of the time it takes to complete a task.

If you need to know where in the trip the latency is coming from, we're going to get into more complicated territory. The clocks on the client and server aren't synchronized by default, so there's no way to compare timestamps between the two and get anything near an accurate time.

Now you can synchronize your clock in a rather simple manner using the KeepAlive request like I was describing.

If you record the 'sentTime' before calling KeepAlive, on return you get the 'currentTime', and the returned value is the 'serverTime'.

currentTime - sentTime = latency
serverTime - currentTime + (latency / 2) = synchronizationDelta
now + synchronizationDelta = synchedTime

Now if you keep using 'synchedTime' as your time, you can repeat this 4 or 5 times to get a more accurate synched time, and then with that be able to better estimate where the latency is.

If that's what you want.

Otherwise the timer above is just fine.
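As a rough sketch of that idea (KeepAlive here is a hypothetical call that returns the server's timestamp; adjust it to whatever your packet class actually exposes):

using System;

class ClockSync
{
    // offset to add to the local clock to approximate server time
    public TimeSpan SynchronizationDelta { get; private set; }
    public TimeSpan Latency { get; private set; }

    // keepAlive is assumed to return the server's current timestamp
    public void Sync(Func<DateTime> keepAlive)
    {
        DateTime sentTime = DateTime.UtcNow;
        DateTime serverTime = keepAlive();        // round trip to the server
        DateTime currentTime = DateTime.UtcNow;

        Latency = currentTime - sentTime;

        // the server stamped its time roughly half a round trip ago
        SynchronizationDelta = serverTime - currentTime + TimeSpan.FromTicks(Latency.Ticks / 2);
    }

    // local time shifted onto the server's clock
    public DateTime SynchedNow => DateTime.UtcNow + SynchronizationDelta;
}

Calling Sync a few times and averaging SynchronizationDelta, as suggested above, smooths out jitter in individual round trips.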

Thank you for the effort, but don't you think this would be better?

// pseudo code
DateTime CurTime;
long Latency = 0;

// call this function every 5 seconds or so to keep the client alive
void SentPacket()
{
    // record the current time
    CurTime = DateTime.Now;
    // send the latency to the client (it's 0 on the first run)
    send(Latency);
}

// this function is called when the client has replied
void OnPacket()
{
    // calculate the latency in milliseconds
    Latency = (long)(DateTime.Now - CurTime).TotalMilliseconds;
}

DateTime isn't as accurate as Stopwatch.

You can do it, but it depends on what you consider to be "better".

I'd probably use DateTime for time syncing, but Stopwatch for latency tracking.
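For example, the same keep-alive pattern with a Stopwatch might look something like this (the send/reply hooks are placeholders for your own packet plumbing):

using System.Diagnostics;

class KeepAliveTracker
{
    private readonly Stopwatch _watch = new Stopwatch();

    // latest measured round-trip latency in milliseconds
    public long LatencyMs { get; private set; }

    // call this every few seconds when the keep-alive packet is sent
    public void OnKeepAliveSent()
    {
        _watch.Restart();
    }

    // call this when the client's reply comes back
    public void OnKeepAliveReply()
    {
        _watch.Stop();
        LatencyMs = _watch.ElapsedMilliseconds;
    }
}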