Around two years ago, I began work on a networked game in Unity. UNet couldn't meet the throughput I required and added many times my actual payload in overhead, so I moved on to other libraries. I won't name them, but the RUDP implementation in one of them was missing the R, another gave you zero control over many aspects of the network (you were stuck with the default topology, configuration, and headers), and none of them were performant enough for my liking.
Dissatisfied with all of the options available, I began to roll a custom solution for the project. It struck me before long that by generating the critical sections of the networking code, I could easily turn it into a highly configurable, general-purpose networking solution, and I've been working on that for the past year or so. The library is now in the final testing phase, and I'm using it to develop a global, instance-based wargame on AWS.
There's no set release date yet, as I keep adding new features and tweaking existing ones and want to make sure it's solid before launch. For now, I just want to get some initial feedback and see what sort of interest there might be in the platform.
What is it, exactly?
I'll begin by being totally open and saying that this isn't exactly plug-and-play. It's very easy to use, and it won't take long to get up to speed with the workflow, but this isn't a component-based networking system. This is a generation suite which you configure inside the editor, where you can build behaviours, define security protocols, set (or dynamise) the tickrate, work with per-element compression settings, set up call duplexity… I could go on.
Synapse is the bridge between ease of use and a hand-rolled RUDP networking library, except it will beat the hand-rolled variants quite easily!
Who is it for?
While the library was initially developed for Rust/Ark-like player-hosted servers, the configuration depth really does make it suitable for any authoritative project you can think of. As I mentioned, I’m now using it to develop an instanced game which is to be deployed globally across dedicated instances for matchmaking. For anyone requiring high-performance networking, in terms of the code itself, throughput, and the ability to crush your bandwidth, this is the tool suite for you.
Features
The tool suite revolves around this window. Simply enable or disable any features, tweak the core configuration, and save your preferences.
The generator will contextually emit code from your settings, literally rewriting the core library classes to give you the best possible results for that configuration.
Take the example above: we have 16 players in the scene, plus a manager, so we only need 17 networked objects at most, right? Setting Max Object Count to 17 then shortens our object ID headers to 5 bits maximum (everything in the library is automatically bitstreamed). Ever. All lookups will use 5 bits, wherever they are in the library. Note that the library reserves GUID 0 for its internal communication, so the bit count is actually the minimum needed to fit (1 + object count).
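To put a number on that, here's the arithmetic as I understand it (a standalone sketch, not library code):

```csharp
using System;

// 17 networked objects + GUID 0 for internal traffic = 18 distinct IDs.
int maxObjectCount = 17;
int idBits = (int)Math.Ceiling(Math.Log(maxObjectCount + 1, 2)); // ceil(log2(18)) = 5

Console.WriteLine(idBits); // 5 bits per object ID, versus 32 for a plain int ID
```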
We can also specify the network tickrate here. As you can see, this is just the default; you can actually update this at runtime and have it automatically replicated across clients. I thought that this might come in handy for, say, a loading screen, where you don’t need to step your simulation so frequently; while minor, those computing resources could add up across many instances.
I won't go into all of the config details here, as this is just an intro post, but rest assured, it also ships with a help mode button giving you inline assistance;
Given that customisation is such a huge focus of the library, I didn't want to limit users to a single code generation format either. You can specify whether you want full source generation, your client code as source in the project and your server code as an external DLL (and vice versa, i.e. separate client/server builds if you're doing a headless server), or the whole thing emitted as a DLL. This isn't just an emission feature, either: not only will all unnecessary code be stripped, contextual optimisations will also be made when separating your client and server builds. Whatever your workflow, Synapse will integrate in some way.
And just like everything else in the library, it can all be done with a couple of clicks;
Synapse also includes many optional auxiliary modules. Want to ID players properly? Just enable PlayerInfo, and add your verification code. If you’re using Steam, for example, you can validate clients through the service, and ID them with their SteamID. Their ID will then be securely sent out to other clients, and can be used to save other data about them to the server. It also opens up the banning feature; with a persistent ID, we can screen banned users per-server, and automatically filter them out. You can also tie any other data you want to this section, such as a name for the player;
With PlayerInfo enabled, you can also use the built-in roles module. Intended more for the first use case, Rust/Ark-like community servers, it provides a very easy way to integrate an admin structure into your servers, with generated API calls that let players make authority calls (e.g. admins can kick other players, and their requests will automatically be verified to make sure they're allowed to do so).
Again, these are totally auxiliary. If you do not want these features, simply uncheck the toggle and they won’t be generated. There is no overhead whatsoever, as everything is statically generated.
Synapse also ships with a built-in buffer feature;
Based on your network tickrate, you simply specify the number of frames to buffer. A raw binary buffer will be generated per behaviour, based on any buffer-enabled members, giving you the absolute most performant implementation in terms of both overhead and footprint.
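As a rough mental model (my own sketch with made-up sizes, not the generated code), a per-behaviour frame buffer boils down to a fixed ring of binary snapshots:

```csharp
using System;

// Hypothetical numbers: at a 20Hz tickrate, buffering 10 frames gives a 500ms window.
const int FrameCount    = 10;
const int BytesPerFrame = 12;  // e.g. one buffer-enabled Vector3 at full precision

byte[] ring = new byte[FrameCount * BytesPerFrame]; // allocated once, fixed footprint
int head = 0;

void StoreFrame(ReadOnlySpan<byte> frame)
{
    frame.CopyTo(ring.AsSpan(head * BytesPerFrame, BytesPerFrame)); // overwrite the oldest slot
    head = (head + 1) % FrameCount;
}
```

The point being that everything is fixed-size and allocated up front, so the footprint is simply frames multiplied by the size of the buffer-enabled members.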
Finally, on the main configuration side, we have the Beacon system, where you can provide metadata for network discovery. For example, with LAN Discovery (which only takes a couple of lines of code to set up), the data here will be fired to any requesting clients, so you can tell them things like the server's name and description before they join.
Workflow
The workflow itself is actually very simple. You define contracts in the editor, and they are generated for you. You then create a child class, inherit the generated code, and implement your logic. Everything on the network side is taken care of for you behind the scenes. Update a networked field and it'll be marked dirty and automatically propagated on the next tick. Invoke an RPC (just like calling a normal method) and it'll be queued for transmission on the next tick.
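To make that concrete, here's roughly what the pattern looks like. Note that the class, field, and RPC names below are placeholders I've written to mirror the description, not Synapse's actual generated output:

```csharp
using UnityEngine;

// Stand-in for what the generator might emit for a "PlayerMovement" contract.
public abstract class PlayerMovementContract : MonoBehaviour
{
    public Vector3 Position { get; set; }   // networked field: writes mark it dirty
    protected void Jump() { }               // RPC stub: calls are queued for the next tick
}

// Your class: inherit the generated contract and implement the gameplay logic.
public class PlayerMovement : PlayerMovementContract
{
    void Update()
    {
        Position = transform.position;      // dirty -> replicated on the next tick

        if (Input.GetKeyDown(KeyCode.Space))
            Jump();                         // invoked like a normal method
    }
}
```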
We begin with the contract metadata. You can generate contracts as MonoBehaviours, or as pure objects with no link to the engine. Synapse also uses a highly performant internal update engine, which you can opt into here; simply tick any update methods you require and override them in your child class for optimised class ticking.
Here, we define some networked fields for the contract. This may look a little overwhelming at first, but it's actually very simple (and remember, the help mode button is always on hand to explain any features!). I won't go into full detail here, because this post is already getting a little long for an announcement, but as you can see from the table header, there's a lot you can customise. Whether it's one-click compression of your quaternions (the example here saves 86 bits per rotation, with only ~0.01 degrees of maximum loss), header size limits for arrays, or send rate limitation (say you have a constantly updated field but only want it sent at 5Hz while the network tickrate is 20Hz, a rate you can even make variable for updating at runtime), it's all here. You get the full power of a hand-rolled implementation through a very strong editor interface.
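For anyone curious how that kind of rotation saving is typically achieved, the usual trick is "smallest three" quantisation. The sketch below is a generic illustration of the technique, not a claim about Synapse's exact scheme or bit counts:

```csharp
using UnityEngine;

public static class RotationCompression
{
    // A unit quaternion's largest component can be rebuilt from the other three,
    // so we send a 2-bit index plus three quantised components instead of four floats.
    public static ulong Compress(Quaternion q, int bitsPerComponent = 12)
    {
        float[] c = { q.x, q.y, q.z, q.w };

        // Find the component with the largest magnitude; it will be dropped.
        int largest = 0;
        for (int i = 1; i < 4; i++)
            if (Mathf.Abs(c[i]) > Mathf.Abs(c[largest])) largest = i;

        // Flip the sign so the dropped component is non-negative, then quantise the rest.
        float sign   = c[largest] < 0f ? -1f : 1f;
        float maxAbs = 1f / Mathf.Sqrt(2f);  // remaining components lie in [-1/sqrt2, 1/sqrt2]
        ulong packed = (ulong)largest;       // 2 bits for the index
        int   shift  = 2;

        for (int i = 0; i < 4; i++)
        {
            if (i == largest) continue;
            float normalised = Mathf.Clamp(c[i] * sign / maxAbs, -1f, 1f);
            ulong quantised  = (ulong)Mathf.RoundToInt((normalised * 0.5f + 0.5f) * ((1 << bitsPerComponent) - 1));
            packed |= quantised << shift;
            shift  += bitsPerComponent;
        }
        return packed; // 2 + 3 * 12 = 38 bits here, versus 128 bits for four raw floats
    }
}
```

Decompression then rebuilds the dropped component as sqrt(1 - x^2 - y^2 - z^2) and restores it in the slot given by the 2-bit index.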
The process for RPCs is very similar. Add calls, give them parameters, and configure everything. You can also set up validation hooks here: a method will be generated which tells you that there is a request for this RPC, and you return a ValidationResult based on the request details. If it's OK, the call will go through; otherwise, security will be handled for you automatically, based on your global security configuration.
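In pseudo-terms, the flow looks something like this. The type shapes below are placeholders I've written to mirror the description, not the real generated API:

```csharp
// Placeholder shapes, not Synapse's actual types.
public enum ValidationResult { Accept, Reject }

public sealed class RpcRequest
{
    public int  SenderId;
    public bool SenderIsAdmin;   // e.g. fed by the roles module mentioned earlier
}

public static class KickValidation
{
    // The generated hook hands you the request details; you decide whether it goes through.
    public static ValidationResult ValidateKick(RpcRequest request)
    {
        return request.SenderIsAdmin
            ? ValidationResult.Accept
            : ValidationResult.Reject;  // rejection is then handled per your security config
    }
}
```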
You can also make calls un/traceable, to save bandwidth. For calls where the client must know who sent it (i.e. another client), making the call traceable will also transmit the sender’s ID, allowing you to ID remotes from other clients without breaking authority.
RPCs also have a feature called duplexity. Full-duplex calls, i.e. calls which can be sent from the client to the server and from the server to the client, are handled as usual. Half-duplex calls, i.e. server->client only or client->server only, can be compacted to reduce bandwidth even further. Say you have an RPC to send input; the server is never going to call this, right? Only the client will send its input to the server. By making it half-duplex, we can compact it against any other half-duplex calls going the other way.
For example, where we have STOC (server to client) and CTOS (client to server), we can represent 8 call states (did/did not receive for 2 calls over 4 contexts) with a single bit;
Client                           Server
SendInput  =>  WRITE ID 0        READ ID 0   =>  SendInput received
READ ID 0  =>  DoX received      DoX         =>  WRITE ID 0
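In other words, each direction resolves call IDs against its own table, so the same ID can mean different calls depending on who is reading the packet. A tiny illustration of the principle (mine, not library code):

```csharp
using System;
using System.Collections.Generic;

// Each direction gets its own call table, so ID 0 is unambiguous in context.
var clientToServer = new Dictionary<uint, string> { [0] = "SendInput" };
var serverToClient = new Dictionary<uint, string> { [0] = "DoX" };

// The server resolves an incoming ID against the client->server table...
Console.WriteLine(clientToServer[0]); // SendInput

// ...while the client resolves the same ID against the server->client table.
Console.WriteLine(serverToClient[0]); // DoX
```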
Just a little example of some of the optimisation which goes on behind the scenes. On a more technical note, "behind the scenes" really is the emphasis here. Whether it's the bitstreaming system, which operates on unmanaged memory entirely outside of the GC (allowing de/serialisation of anything in a couple of hundred nanoseconds), the dynamic, intelligent compression utilities, or the custom fragmentation heaps designed to handle high-volume, allocation-critical sections of the codebase, you can rest assured that your netcode sits on top of nothing but the highest-performing systems, which you don't even have to touch to feel the full benefit of.
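To give a flavour of what GC-free bitstreaming means in practice, here's a bare-bones sketch of packing arbitrary bit widths into stack memory (again, an illustration of the general idea, not Synapse's internals):

```csharp
using System;

public static class BitPackSketch
{
    // Write the low 'bitCount' bits of 'value' into 'buffer' starting at 'bitPos'.
    public static void WriteBits(Span<byte> buffer, ref int bitPos, uint value, int bitCount)
    {
        for (int i = 0; i < bitCount; i++)
        {
            int byteIndex = (bitPos + i) >> 3;
            int bitIndex  = (bitPos + i) & 7;
            if (((value >> i) & 1u) != 0)
                buffer[byteIndex] |= (byte)(1 << bitIndex);
        }
        bitPos += bitCount;
    }

    public static void Example()
    {
        Span<byte> packet = stackalloc byte[64]; // stack memory: nothing for the GC to collect
        packet.Clear();

        int bitPos = 0;
        WriteBits(packet, ref bitPos, value: 3, bitCount: 5);  // e.g. a 5-bit object ID
        WriteBits(packet, ref bitPos, value: 1, bitCount: 3);  // e.g. a 3-bit call ID
    }
}
```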
Implementation
The implementation is incredibly straightforward. All of the networking time can be spent in the GUI; once it comes to generation, all you have to do is implement the logic. There's no messing around with lower-level code required, nor is there any need to: any changes you would usually want to make to a networking library at the transport layer are already available to you through the GUI.
Summary
I hope Synapse has piqued your interest. Again, there are no set release dates just yet, as I'm continually making adjustments and testing edge cases, but I'd love to gauge interest in the suite and see what people think of it.
Thanks for reading, and please feel free to ask any questions here!