By the way, I implemented a pre-allocation mechanism for tasks, but the funny thing is that it didn’t affect the results. Memory allocation there is very cheap, so yea, I didn’t add this feature to the application. It’s just a waste of time and lines of source code.
Sorry it took so long to get this out. It’s a bit rough; I’ll probably polish it up some over the next week or so.
FYI this is basically a collection of techniques for optimizing data for realtime games that I’ve worked out over the years.
Ah, that’s nice, Chris. Personally, I’ve never used Protobuf, for various reasons. I’m just in love with MessagePack.
By the way, I would add integer encoding to these techniques. In some cases, it really helps.
The MLAPI has some sweet BitWriters & BitReaders that write everything as VarInts with ZigZag encoding, similar to Protobuf. They also write bools as bits, not bytes, etc.
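Just to illustrate for anyone who hasn’t seen it, here’s a minimal C# sketch of base-128 varints with ZigZag, the same idea Protobuf and those BitWriters/BitReaders rely on (illustrative only, not the actual MLAPI or Protobuf code):

```csharp
using System.IO;

public static class VarIntExample
{
    // ZigZag maps signed to unsigned so small negative values also stay small:
    // 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    public static uint ZigZagEncode(int value) => (uint)((value << 1) ^ (value >> 31));

    public static int ZigZagDecode(uint value) => (int)(value >> 1) ^ -(int)(value & 1);

    // Base-128 varint: 7 payload bits per byte, high bit set while more bytes follow.
    public static void WriteVarUInt(Stream stream, uint value)
    {
        while (value >= 0x80)
        {
            stream.WriteByte((byte)(value | 0x80));
            value >>= 7;
        }
        stream.WriteByte((byte)value);
    }

    public static uint ReadVarUInt(Stream stream)
    {
        uint result = 0;
        int shift = 0;
        while (true)
        {
            int b = stream.ReadByte();
            result |= (uint)(b & 0x7F) << shift;
            if ((b & 0x80) == 0) return result;
            shift += 7;
        }
    }
}
```

With that, a delta like -1 goes over the wire as a single byte instead of four.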
GitHub - TwoTenPvP/MLAPI: A game networking framework built for the Unity Engine to abstract game networking concepts. You can read about it here.
And here is the source.
That’s what varint encoding does, hence protobuf. You get it all in one package.
Well, protobuf isn’t that great for performance, especially not for realtime games, mainly due to heap allocation. The BitWriter we have essentially has a list pool where you can stack objects, so it doesn’t expand, and you don’t have to allocate when writing. You can write to a pre-allocated buffer. Internally, the MLAPI uses this, and it results in almost no allocations when writing the headers.
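To make the pre-allocated buffer part concrete, here’s a rough sketch of the idea; the PooledWriter name and API are made up for illustration and are not MLAPI’s actual BitWriter:

```csharp
using System;

// Writes into a buffer that was allocated (or rented) once up front,
// so serializing a message header allocates nothing per message.
public sealed class PooledWriter
{
    private readonly byte[] _buffer;
    private int _position;

    public PooledWriter(byte[] buffer) => _buffer = buffer;

    public void Reset() => _position = 0;

    public void WriteByte(byte value) => _buffer[_position++] = value;

    public void WriteUShort(ushort value)
    {
        _buffer[_position++] = (byte)value;
        _buffer[_position++] = (byte)(value >> 8);
    }

    // The caller sends exactly the written bytes and then calls Reset()
    // to reuse the same buffer for the next message.
    public ArraySegment<byte> AsSegment() => new ArraySegment<byte>(_buffer, 0, _position);
}
```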
FlatBuffers is also very interesting though; it allows random read access, etc.
Varint encoding variants have a lot of research being done on them, like this:
https://lemire.me/blog/2017/09/27/stream-vbyte-breaking-new-speed-records-for-integer-compression/
ID compression is a big deal in stuff like search engines.
FYI heap allocation is not really a protocol buffer issue per se. I have no per-message allocation in my setup using protobuf-net combined with DotNetty. It’s a combination of using ArrayPool and ByteBuffers.
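As a hedged sketch of that pattern (the 4096-byte cap and the sendToSocket callback are my own placeholders, and the DotNetty ByteBuffer side is left out): rent from ArrayPool, let protobuf-net write into the rented buffer, send, then return it.

```csharp
using System;
using System.Buffers;
using System.IO;
using ProtoBuf;

public static class PooledSend
{
    public static void Send<T>(T message, Action<ArraySegment<byte>> sendToSocket)
    {
        // Rent instead of allocating; 4096 is an assumed max message size.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
        try
        {
            // Wrap the rented buffer so protobuf-net writes directly into it.
            using (var stream = new MemoryStream(buffer, 0, buffer.Length, writable: true))
            {
                Serializer.Serialize(stream, message);
                sendToSocket(new ArraySegment<byte>(buffer, 0, (int)stream.Position));
            }
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```

FlatBuffers is good on memory but bad on space. Not really suited for realtime games.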
@snacktime Oh, now I see.
Great stuff guys!
The thing that I don’t like about those serialization libraries is schema pre-compilation. This is one of the reasons why I prefer MessagePack, where a class/struct itself is the schema.
Yea, buffer pooling is typically used everywhere. I’ve backported System.Buffers from .NET Core to Unity with some changes to keep it thread-safe on .NET 3.5.
Hi there. Do you happen to know how to specify ZigZag encoding in a .proto file, instead of at runtime serialization?
.proto files aren’t generally used with protobuf-net; the more idiomatic approach with attributes is preferred.
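For what it’s worth, in the .proto language ZigZag is what the sint32/sint64 field types give you; with protobuf-net attributes the rough equivalent is DataFormat.ZigZag. A small example (the Movement type is just made up):

```csharp
using ProtoBuf;

[ProtoContract]
public class Movement
{
    // Encoded as a ZigZag varint, so small negative deltas stay small on the wire.
    [ProtoMember(1, DataFormat = DataFormat.ZigZag)]
    public int DeltaX { get; set; }

    [ProtoMember(2, DataFormat = DataFormat.ZigZag)]
    public int DeltaY { get; set; }
}
```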
Added a section on the basic approach to zero-GC serialization/deserialization. It uses protobuf-net as the example, but should work for any library that provides Merge functionality for deserialization. It also uses System.Buffers, although creating your own byte[] pool isn’t hard.
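A minimal sketch of that Merge-based read path, assuming protobuf-net and a buffer rented elsewhere (the PlayerState type is just a placeholder):

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
public class PlayerState
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public float X { get; set; }
    [ProtoMember(3)] public float Y { get; set; }
}

public static class ZeroGcRead
{
    // The same instance is reused for every incoming packet, so the
    // steady-state deserialization path doesn't allocate.
    private static readonly PlayerState _scratch = new PlayerState();

    public static PlayerState Read(byte[] rentedBuffer, int length)
    {
        using (var stream = new MemoryStream(rentedBuffer, 0, length, writable: false))
        {
            // Merge fills the existing instance instead of constructing a new one.
            Serializer.Merge(stream, _scratch);
            return _scratch;
        }
    }
}
```

One caveat: Merge doesn’t clear fields that are missing from the payload, so the reused instance keeps stale values for those unless you reset it first.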
Well, since you shared this, people will need the buffers library itself. So, here’s the backported version for Unity: [link removed].
By the way, I would like to know how you handle scope/area of interest.
As in tracking stuff in range of a point?
Yep. I know that you are using a concurrent fixed array for this, so I think it would be nice if you added more information about it.
Great news, guys. Valve added a flat interface, so I’ll make one more attempt to integrate it with the application.
So I actually use a couple of different approaches depending on the context.
Originally I started out using spatial hashing.
Spatial hashing is popular but it has a few downsides.
- It doesn’t give you exact precision on distance.
- You need to create a separate hash with different cell sizes for each distance range you want to query with.
The good thing about it is that it scales well. It doesn’t matter whether you have 200 or 200,000 entities, the performance is the same. The base cost for updating and querying is higher, though. Query results have to be read from cells and written to an array. A non-alloc API for it is easy enough, but it is considerably slower than just iterating over a single array. How it scales is where it shines. There’s a small sketch of the technique just below.
Just a note on spatial hashing vs quad trees: quad trees give more precision, but most require regenerating the entire tree when you update anything. Generally, these work best for static data, where you are not adding/removing entities and the entities don’t move.
The thing is, I think the norm is that you care about precision and are working with a relatively small number of entities, a few hundred at most. In that case, linear iteration plus Vector2 distance checking is really quite cheap.
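For anyone who hasn’t used the technique, here’s a bare-bones, single-threaded sketch of spatial hashing (the cell size and int entity IDs are arbitrary choices for illustration, not the actual implementation):

```csharp
using System;
using System.Collections.Generic;

public sealed class SpatialHash
{
    private readonly float _cellSize;
    private readonly Dictionary<(int, int), List<int>> _cells = new Dictionary<(int, int), List<int>>();

    public SpatialHash(float cellSize) => _cellSize = cellSize;

    private (int, int) CellOf(float x, float y) =>
        ((int)Math.Floor(x / _cellSize), (int)Math.Floor(y / _cellSize));

    public void Insert(int entityId, float x, float y)
    {
        var key = CellOf(x, y);
        if (!_cells.TryGetValue(key, out var list))
            _cells[key] = list = new List<int>();
        list.Add(entityId);
    }

    // Non-alloc style query: results go into a caller-provided buffer.
    // Note this is cell precision only; an exact distance check against
    // entity positions would still be needed afterwards.
    public int Query(float x, float y, float radius, int[] results)
    {
        int count = 0;
        var (minX, minY) = CellOf(x - radius, y - radius);
        var (maxX, maxY) = CellOf(x + radius, y + radius);
        for (int cx = minX; cx <= maxX; cx++)
            for (int cy = minY; cy <= maxY; cy++)
                if (_cells.TryGetValue((cx, cy), out var list))
                    foreach (var id in list)
                        if (count < results.Length)
                            results[count++] = id;
        return count;
    }
}
```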
The concurrent array thing was about finding something that worked well for the linear search pattern. The appropriate .NET concurrent structures, like ConcurrentDictionary, allocate when iterating the values because everything is in buckets. There is no single backing array, so it has to allocate a new list on every call to Values.
So the concurrent array has a single backing array, plus a concurrent queue and a concurrent dictionary to manage entity IDs and map those to backing array indexes. It uses an optimistic lock when writing to the backing array. You can access entities by ID as well as iterate over the backing array directly.
So it’s guaranteed to write a complete entity to the backing array safely, or not at all, but it’s not guaranteed that the write itself won’t fail. This is done via Interlocked.CompareExchange, and we just ignore the result. We don’t really care about that, because the only cases where you might have two threads writing the same entity are things like removing the entity.
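A rough sketch of that concurrent array idea, with illustrative names and most edge cases omitted:

```csharp
using System.Collections.Concurrent;
using System.Threading;

public sealed class ConcurrentEntityArray<T> where T : class
{
    private readonly T[] _items; // single backing array, iterate it directly
    private readonly ConcurrentQueue<int> _freeSlots = new ConcurrentQueue<int>();
    private readonly ConcurrentDictionary<int, int> _indexById = new ConcurrentDictionary<int, int>();

    public ConcurrentEntityArray(int capacity)
    {
        _items = new T[capacity];
        for (int i = 0; i < capacity; i++) _freeSlots.Enqueue(i);
    }

    // Linear, allocation-free iteration over the backing array.
    public T[] Items => _items;

    public bool TryAdd(int entityId, T entity)
    {
        if (!_freeSlots.TryDequeue(out int slot)) return false;
        _indexById[entityId] = slot;
        // Optimistic write: publish the whole entity or nothing. If another
        // thread raced us (e.g. a concurrent remove), the exchange fails and
        // we simply ignore the result, as described above.
        Interlocked.CompareExchange(ref _items[slot], entity, null);
        return true;
    }

    public bool TryGet(int entityId, out T entity)
    {
        entity = null;
        return _indexById.TryGetValue(entityId, out int slot)
            && (entity = Volatile.Read(ref _items[slot])) != null;
    }

    public void Remove(int entityId)
    {
        if (_indexById.TryRemove(entityId, out int slot))
        {
            Volatile.Write(ref _items[slot], null);
            _freeSlots.Enqueue(slot);
        }
    }
}
```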
Yea, hashing is what I’m currently using, and I’m still looking for a better approach. Thank you.
@nxrighthere Sorry, I don’t want to annoy you, but @arcturgray suggested this here.
So I’m a bit lost about what changes this is referring to.
This one and this one. By the way, the original wrapper is not ideal and requires a lot of changes. I would like to share my private repository, but it’s no longer compatible with the original ENet, unfortunately.
Thanks. I’ll try to make it work:)