Sort of. There isn’t any overhead per piece of data (it’s just dirty bit + payload). With UNET’s network reader/writer an integer consumes 1 bit in the dirty mask and 4 bytes for the data payload (if dirty). The exception is bools, which consume a dirty bit plus 1 byte (8 bits) when they should consume 1 bit (the reader/writer works in whole bytes).
There is some overhead associated with the packet. I am not 100% sure what this contains, but I believe it is:
2 bytes: short MessageBase type
1-5 bytes: NetworkIdentity.id (a packed uint, I think)
1 byte: bool initialState?
+ low-level packet overhead (unsure how much this is)
And for each NetworkBehaviour script there is a packed dirty-bit mask (typically 1 byte, but up to 4 bytes at the maximum of 32 SyncVars). So the more NetworkBehaviours you have, the more overhead you have.
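As a rough back-of-the-envelope example using the numbers above (and assuming 3 bytes for the packed id): an object with two NetworkBehaviours, each with a single dirty int SyncVar, would serialize to about 2 + 3 + 1 + 2 × (1 + 4) = 16 bytes, plus whatever the low-level packet overhead turns out to be.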
SyncLists work off a different system, I think. They don’t use dirty bits and aren’t part of OnSerialize/OnDeserialize the way SyncVars are. I’m unsure of their overhead (I believe a MessageBase is sent any time something changes).
That’s the best explanation out there. I think the current system is well suited for general usage since we can’t assume ranges for values. As for the bool, maybe it’s possible to detect all bools being serialized and pack every 8 of them into a byte with some shifting magic? I’m not sure how the code looks at your end, but it’s no big deal when overriding the serialization logic at my end.
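For what it’s worth, the shifting itself is only a few lines; a minimal sketch (helper names are made up, not part of UNET):

// Pack up to 8 bools into a single byte (hypothetical helper, not UNET API).
static byte PackBools(bool[] bools)
{
    byte packed = 0;
    for (int i = 0; i < bools.Length && i < 8; i++)
    {
        if (bools[i])
            packed |= (byte)(1 << i);   // set bit i
    }
    return packed;
}

// Read bit i back out of the packed byte.
static bool UnpackBool(byte packed, int i)
{
    return (packed & (1 << i)) != 0;
}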
What I ended up doing is making a BitWriter/Reader that works in bits instead of bytes. Then, as you say, it rounds up to the nearest byte before sending the packet. This allows a few things (a simplified sketch of the writer follows the list):
1: Bools are written as 1 bit instead of 8 bits
2: Not limited to 32 SyncVars
3: Floats can be read/written compressed to X bits. For 0-360 degree angles I think I use around 12 bits.
4: Avoids the overhead of dirty-bit masks. I have around 30 NetworkBehaviour scripts on the player, so that’s 8 × 30 = 240 bits of overhead any time it’s serialized. Instead, if there are 0 SyncVars there are 0 bits of dirty overhead, and if there are 3 SyncVars there are 3 bits (instead of the 8-40 bits per script that UNET uses).
5: And at a lower level it lets me send data bidirectionally on the same object (instead of everything being Client → Server or Server → Client, I can do both). For example, position data is serialized from Client(owner) → Server → Clients(other), but other data such as animations is serialized from Server → Clients(all).
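Roughly, the writer half of such a class can look like the sketch below (heavily simplified and with bounds checks omitted; names are illustrative, and a matching BitReader is needed on the other side):

// Minimal bit-level writer sketch: packs values into a byte buffer one bit at a time.
public class BitWriter
{
    private readonly byte[] buffer;
    private int bitPosition;                     // current write position, in bits

    public BitWriter(int capacityBytes) { buffer = new byte[capacityBytes]; }

    public void WriteBool(bool value)            // costs exactly 1 bit
    {
        if (value)
            buffer[bitPosition >> 3] |= (byte)(1 << (bitPosition & 7));
        bitPosition++;
    }

    public void WriteUInt(uint value, int bits)  // write the lowest 'bits' bits
    {
        for (int i = 0; i < bits; i++)
            WriteBool(((value >> i) & 1) != 0);
    }

    // Quantize an angle in [0, 360) into 12 bits: 4096 steps ≈ 0.09° resolution.
    public void WriteAngle(float degrees)
    {
        uint q = (uint)(degrees / 360f * 4096f) & 0xFFF;
        WriteUInt(q, 12);
    }

    // Round up to whole bytes only when the packet is actually sent.
    public byte[] ToArray()
    {
        int byteCount = (bitPosition + 7) >> 3;
        byte[] result = new byte[byteCount];
        System.Array.Copy(buffer, result, byteCount);
        return result;
    }
}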
In case someone is looking for a way to serialize data with the LLAPI, here is my class; you are free to use it.
How to use it:
float test = 3.5f;
CSerializer serializer = new CSerializer(1500); // call this only once
// every time you want to serialize, call PrepareSerialization() or PrepareDeserialization()
serializer.PrepareSerialization();
serializer.Serialize(ref test);
// to deserialize, use:
serializer.PrepareDeserialization(data); // data is a byte[] containing the data you will be deserializing
// your float has been serialized into serializer.buffer; use serializer.dataSize to get the data size
// this class stores 8 bool values in one byte
It took me a long time to do. I ended up using MessageBase for everything except the initialState data.
But basically every script has a flag: it’s either owned by the server or by a client (as opposed to the entire object being owned by the server or a client). This is the fundamental change that allows bi-directional communication on the same object. Action commands and position are Client(owner) → Server → Client(other), and most other things such as stats, animations, and audio are Server → Client(all).
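The flag itself doesn’t need to be anything fancy; a sketch of the idea (ScriptAuthority and SyncedBehaviour are illustrative names, not UNET types or my exact code):

using UnityEngine.Networking;

// Illustrative only: per-script authority instead of per-object authority.
public enum ScriptAuthority { Server, OwnerClient }

public abstract class SyncedBehaviour : NetworkBehaviour
{
    // Who is allowed to write this script's state and push it to the other side.
    public ScriptAuthority authority = ScriptAuthority.Server;

    protected bool CanWrite()
    {
        return authority == ScriptAuthority.Server
            ? isServer          // server-owned data: only the server writes
            : hasAuthority;     // client-owned data: only the owning client writes
    }
}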
I use MessageBase to send everything so the code is shared: the same code sends from Server → Client as from Client → Server.
The tricky part is stuff like hero position. The server instantiates a new hero for a newly connected client and sets the hero’s initial position, so the initial position data is Server → Client(all). Then the player object is spawned on the owner client, which is assigned control of the position. From then on the position script sends data Client(owner) → Server → Client(other): when the server receives a MessageBase from a client it forwards the data to the other clients (cheat checks could be placed here if you wanted). If a new client connects, it gets the latest data from the server (since the owner client cannot send directly to the new client). For monsters, which are always owned by the server, it’s much simpler because all position data is always Server → Client(all).
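A rough sketch of that relay using UNET’s MessageBase handlers (PositionMessage, PositionRelay, and the message type constant are illustrative names, not my actual code):

using UnityEngine;
using UnityEngine.Networking;

public class PositionMessage : MessageBase
{
    public NetworkInstanceId netId;
    public Vector3 position;
}

public static class PositionRelay
{
    public static readonly short PositionMsgType = (short)(MsgType.Highest + 1);

    // Server side: register once, e.g. when the server starts.
    public static void RegisterServerHandler()
    {
        NetworkServer.RegisterHandler(PositionMsgType, OnServerPosition);
    }

    // The owning client sends with NetworkManager.singleton.client.Send(PositionMsgType, msg).
    // Client(owner) -> Server: the server receives the owner's position...
    static void OnServerPosition(NetworkMessage netMsg)
    {
        PositionMessage msg = netMsg.ReadMessage<PositionMessage>();
        // (cheat checks could go here)

        // ...and forwards it to every other client.
        foreach (NetworkConnection conn in NetworkServer.connections)
        {
            if (conn == null || conn == netMsg.conn)
                continue;                         // skip the owner that sent it
            conn.Send(PositionMsgType, msg);
        }
        // The server also keeps the latest value so it can be sent to clients
        // that connect later (the owner can't reach them directly).
    }
}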
I had to band-aid a lot of UNET bugs to get this to work. For example, UNET re-spawns the same object repeatedly on each scene change. I had to move away from RPCs/Commands because they aren’t robust and aren’t always delivered, due to NetworkIdentity.observers dropping off. Another tricky bug: when a new client connects, all data is pushed to the new client but existing clients do not receive any dirty data, which causes SyncVars to desync between clients. You must push all data to the existing clients and clear the dirty bits before sending the full data to the newly connected client to prevent the de-sync.
There were so many bugs that I have since abandoned UNET (it’s just not robust, and it seems it’s not really supported anymore; it looks like just one person working on it) and have moved to Steamworks.NET, but I’m using a similar network strategy to the above.