Receive UDP with no allocations

_UdpClient.Client.ReceiveFrom(ReceiveBuffer, 0, ReceiveBuffer.Length, SocketFlags.None, ref endPoint);

So I have something like this; however, it's producing garbage on every packet received, even though it's writing into an existing buffer. Is there any way to make it not produce any garbage?

https://referencesource.microsoft.com/#System/net/System/Net/Sockets/Socket.cs,afb04b2ee3910a41

Here's the internal code for ReceiveFrom:

remoteEP = endPointSnapshot.Create(socketAddress);

Could this line be causing the allocations? Could someone assist me in fixing it?

Yes, it does, but it’s not the only issue in the internal implementation of the ReceiveFrom method. At the very beginning of the code it also does

SocketAddress sockaddr = remoteEP.Serialize();

This creates a new SocketAddress based on the one stored in the endpoint. That SocketAddress is what is actually passed to the internal receivefrom call that eventually ends up in native code. Besides the fact that this internal method requires an unsafe context, the endpoint created at the end is stored for later use by the socket. I'm not really sure how important this seed_endpoint really is.
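Putting the two lines together, the allocation path inside ReceiveFrom looks roughly like this (a paraphrased, non-compilable sketch of the linked reference source, with everything except the allocation sites stripped out):

```
// paraphrased / simplified from the linked reference source
public int ReceiveFrom(byte[] buffer, ..., ref EndPoint remoteEP)
{
    EndPoint endPointSnapshot = remoteEP;
    SocketAddress socketAddress = remoteEP.Serialize();  // allocation #1: new SocketAddress per call

    // ... native receivefrom fills socketAddress with the sender's address ...

    remoteEP = endPointSnapshot.Create(socketAddress);   // allocation #2: new endpoint per call
    return bytesTransferred;
}
```

So even though you pass in an existing buffer and an existing endpoint, both of those internal lines allocate on every single receive.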

Anyway, you could try creating your own ReceiveFrom wrapper. That requires writing unsafe code, so you need an unsafe context, and also some reflection (just once, at startup) to be able to access the internal method directly.
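A hedged sketch of that one-time reflection lookup. Note the method name "ReceiveFrom_internal" is a Mono-specific assumption; on other runtimes the lookup simply comes back empty, and you have to inspect the actual parameter list before you can bind a delegate to it:

```csharp
using System;
using System.Linq;
using System.Net.Sockets;
using System.Reflection;

static class InternalReceive
{
    // Done once at startup. GetMethods + a name filter avoids an
    // AmbiguousMatchException if the runtime has several overloads.
    // "ReceiveFrom_internal" is the Mono-internal name (an assumption);
    // this will be null on runtimes that don't have it.
    public static readonly MethodInfo ReceiveFromInternal =
        typeof(Socket)
            .GetMethods(BindingFlags.NonPublic | BindingFlags.Public |
                        BindingFlags.Static | BindingFlags.Instance)
            .FirstOrDefault(m => m.Name == "ReceiveFrom_internal");
}
```

If the lookup succeeds, inspect ReceiveFromInternal.GetParameters() and bind a delegate with Delegate.CreateDelegate rather than calling MethodInfo.Invoke — Invoke allocates an object[] for the arguments on every call, which would defeat the whole purpose.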

This is really a topic for a high-performance network library that handles all of that, so there's no "easy fix" for this problem. A lot of the stock .NET / Mono framework stuff is simply not designed to reduce garbage.

I quickly gave it a shot, directly calling my wrapped "ReceiveFrom_internal", and the allocated memory went down from 450 bytes to 90 bytes per receive. However, those 90 bytes seem to be allocated inside the innermost "ReceiveFrom_internal" implementation of the Mono framework. That said, wrapping this specific internal implementation is probably not a good idea either, since the internal implementation differs vastly depending on the framework used (Mono, .NET, .NET Standard, IL2CPP).

So if your target platform is Windows, the best approach would be to directly call the native winsock method "recvfrom". However, you then have to take care of everything yourself. The socket address argument is quite fishy, as it does not have a single fixed / clear format and varies depending on the protocol used, so this part is the most dangerous one. The SocketAddress class does handle some of this stuff; though in .NET Standard the SocketAddress class seems to be internal and is only used to "translate" the native byte array into an IPEndPoint.

So in order to receive UDP messages without any garbage allocations, you would have to communicate with the winsock interface manually. If your target platform is not Windows, you have to find or implement the corresponding native call for that platform.
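For the Windows case, a minimal sketch of such a manual path could look like the following. Everything here beyond the recvfrom import itself is my assumption: the parser assumes the 16-byte sockaddr_in layout used by AF_INET (2 bytes family, 2 bytes port in network byte order, 4 bytes IPv4 address, 8 bytes padding); IPv6 uses a different, larger layout, and error handling is omitted:

```csharp
using System;
using System.Net.Sockets;
using System.Runtime.InteropServices;

static class RawUdp
{
    // winsock's native recvfrom; Windows-only (ws2_32.dll).
    [DllImport("ws2_32.dll", SetLastError = true)]
    static extern int recvfrom(IntPtr socket, byte[] buf, int len, int flags,
                               byte[] from, ref int fromLen);

    // Preallocated sockaddr buffer, reused for every receive (16 bytes = sockaddr_in).
    static readonly byte[] FromBuffer = new byte[16];

    // Receives into an existing buffer without allocating a SocketAddress or
    // IPEndPoint; the sender's address and port come back as plain value types.
    public static int Receive(Socket socket, byte[] buffer, out uint ipv4, out ushort port)
    {
        int fromLen = FromBuffer.Length;
        int received = recvfrom(socket.Handle, buffer, buffer.Length, 0,
                                FromBuffer, ref fromLen);
        ParseSockaddrIn(FromBuffer, out ipv4, out port);
        return received; // -1 (SOCKET_ERROR) on failure; check Marshal.GetLastWin32Error()
    }

    // sockaddr_in layout: [0..1] family, [2..3] port (big-endian), [4..7] IPv4 address.
    // The address is packed low-byte-first, i.e. 127.0.0.1 -> 0x0100007F.
    public static void ParseSockaddrIn(byte[] sockaddr, out uint ipv4, out ushort port)
    {
        port = (ushort)((sockaddr[2] << 8) | sockaddr[3]);
        ipv4 = (uint)(sockaddr[4] | (sockaddr[5] << 8) |
                      (sockaddr[6] << 16) | (sockaddr[7] << 24));
    }
}
```

With a bound UDP socket (new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp)), RawUdp.Receive blocks like the managed call but reuses the same 16-byte sockaddr buffer every time instead of serializing and re-creating endpoints.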

Finally, the question is whether all of that is worth the effort. You may just want to look for a ready-to-use library that is optimised to reduce or remove allocations.
