Multiplayer and Double Precision Coordinates Discussion

@Max-om ,

I’m replying to your question here to avoid going off-topic in the original thread. I wrote a WebSocket server which I host across Azure App Service instances. The benefit of using Azure in my scenario is substantially reduced operating cost and tight integration with existing server-side game logic which already runs in Azure. In my scenario, I did not need (or want) server-side instances of the Unity player. The server scales to large numbers of concurrent users with what I consider to be reasonable cost.

There are disadvantages, though. Not having a Unity player server-side means I need to engage in tricks to handle physics, which may be a separate discussion.

I’m sending/receiving single-precision transforms between client and server. Received positions are stored separately from the Unity hierarchy, and the hierarchy is then updated from this data at single precision. This networking logic could be changed to accommodate double precision, but I’m not sure it would be of much use, since Unity physics and transforms are still single precision. I’m not planning to use DOTS right now.

In short, I am shifting the origin, and then applying that shift to both incoming and outgoing network data. Since the origin is shifted to stay near the player, I get “good enough” precision even at greater distances from the world origin.
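
Here is a minimal sketch of that idea (my own illustration, not the actual code; the class, field, and method names are assumptions): an origin offset follows the local player, incoming network positions are stored outside the hierarchy and shifted before being written to transforms, and outgoing positions get the shift added back.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of the origin-shift approach described above (illustrative,
// not the actual code; names are assumptions). Network data stays single
// precision, and an origin offset that follows the local player is applied to
// both incoming and outgoing positions.
public class NetworkOriginShift
{
    // Current origin offset, periodically re-centered on the local player.
    public Vector3 originOffset;

    // Received network positions, stored outside the Unity hierarchy.
    readonly Dictionary<int, Vector3> receivedPositions = new Dictionary<int, Vector3>();

    // Incoming: store the server-space position as-is.
    public void OnPositionReceived(int entityId, Vector3 serverPosition)
    {
        receivedPositions[entityId] = serverPosition;
    }

    // Update the hierarchy from the stored data, shifted by the origin offset.
    public void ApplyToTransform(int entityId, Transform target)
    {
        if (receivedPositions.TryGetValue(entityId, out Vector3 serverPosition))
            target.position = serverPosition - originOffset;
    }

    // Outgoing: convert a local transform back to server space before sending.
    public Vector3 ToServerPosition(Transform source)
    {
        return source.position + originOffset;
    }
}
```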

No idea if this works for your scenario, @Max-om. I’m just sharing what I am doing. The discussion is helpful for me as well.

Not really a solution for us, since our game relies heavily on physics. But if you have a custom server, you should be able to use World Streamer 2 for the clients, since it has floating origin built in.

I also implemented my own lightweight server (for Allspace) to run in AWS instead of using a server build of Unity.

When building your own solution, remember that you have full control over everything. You can use whatever precision you need to achieve your goals on the server, the client, and over the network. For example, you could use doubles for storing some positions, but then convert to floats to render. You can also use 16-bit integers over the network for relative positions, as long as it is obvious to the client and server what the position is relative to. Additionally, you can use three 16-bit integers for rotation data over the network instead of four floats.
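
As a hedged sketch of that kind of quantization (the range constant and names are my assumptions, not a packet format from this thread), here is one way relative positions and Euler rotations could be packed into 16-bit integers; with an assumed ±1000 m range the position resolution works out to roughly 3 cm, and the angle resolution to roughly 0.0055 degrees.

```csharp
using UnityEngine;

// Hedged sketch, not an actual packet format from the thread. Quantizes a
// position relative to a known reference into shorts, and Euler angles into
// unsigned shorts. The range constant below is an illustrative assumption.
public static class PacketQuantization
{
    const float MaxRelativeMeters = 1000f;   // assumed max offset from the reference point

    // One position axis -> 16 bits (~3 cm resolution over +/- 1000 m).
    public static short QuantizePosition(float meters)
    {
        float clamped = Mathf.Clamp(meters, -MaxRelativeMeters, MaxRelativeMeters);
        return (short)Mathf.RoundToInt(clamped / MaxRelativeMeters * short.MaxValue);
    }

    public static float DequantizePosition(short quantized)
    {
        return (float)quantized / short.MaxValue * MaxRelativeMeters;
    }

    // One Euler angle in degrees -> 16 bits (~0.0055 degree resolution).
    public static ushort QuantizeAngle(float degrees)
    {
        float wrapped = Mathf.Repeat(degrees, 360f);
        return (ushort)Mathf.RoundToInt(wrapped / 360f * ushort.MaxValue);
    }

    public static float DequantizeAngle(ushort quantized)
    {
        return (float)quantized / ushort.MaxValue * 360f;
    }
}
```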

I often intentionally design my network packets using the lowest precision I can get away with, and then use more precision on clients and servers. For example, floats are often used for storing health values on clients and servers, but I will often use one byte for sending the health data over the network. Additionally, I will skip a separate dead/alive bool and fold it into the health byte. If the health byte is 0, then the health float is 0.0f and Alive=false. If the health byte is not zero, then assume Alive=true and cast the float health from the byte health.
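
A small sketch of that health-byte trick (the MaxHealth constant and names are my assumptions, not a format from the thread):

```csharp
using UnityEngine;

// Sketch of the health-byte idea described above (names and the MaxHealth
// value are assumptions). A byte of 0 means dead; any non-zero byte maps back
// to a float health value on the client or server.
public static class HealthCodec
{
    const float MaxHealth = 100f;   // assumed full health on client/server

    public static byte Encode(float health, bool alive)
    {
        if (!alive || health <= 0f)
            return 0;

        // Map (0, MaxHealth] to [1, 255] so a living entity never encodes to 0.
        int b = Mathf.CeilToInt(health / MaxHealth * 255f);
        return (byte)Mathf.Clamp(b, 1, 255);
    }

    public static void Decode(byte encoded, out float health, out bool alive)
    {
        alive = encoded != 0;
        health = alive ? encoded / 255f * MaxHealth : 0f;
    }
}
```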

Anyway, when designing the UDP network packets from scratch, I always think of every possible trick to reduce the size of the data being sent over the network.

Hey Shiloh. I appreciate the knowledge you’ve gained by building two space combat sims. There aren’t many devs out there I can chat with who have this kind of hard-earned experience. If you do not mind, would it be alright to ask you a few questions?

Yes.

For your networking, at what kind of frequency did you transmit packets? I ask because I am not seeing much difference between 30 ms (33 Hz) and 300 ms (3.3 Hz) thanks to successful lerping, and I am unsure what a good default is. Did you run into any networking pitfalls that only manifest when deployed and played by actual users, e.g. high-latency scenarios?

Did you transmit anything else (for ephemeris) besides position and rotation (e.g. angular velocity)?

For your asteroids or station superstructures, did you enforce physics client-side or server-side? Were there things you did not implement as server-authoritative that still worked well?

I understand the battles were limited to 100 players – when you finished optimizing, approximately how many connected clients were you able to support per AWS instance?

How often did you encounter support issues due to unexpected GPU compatibility (e.g. Nvidia vs Radeon vs Intel integrated graphics)?

Did you support VR from day one, or was this something you added later based on user demand?

Thank you! Thank you! Thank you! 🙂

For network packet frequency, I strongly prefer lots of small packets. I like to use 100 tick or faster. I have also worked on lower tick systems that had good hit reg despite the low tick.

I try to reduce the amount of information handled over the network. For example, I would not typically send angular velocity over the network; the clients should be able to quickly compute it from recent packets, especially when each packet includes time data. That time data could be a float or double network time, or a long integer tick number. Every networking solution needs an easy way to state what moment in time each packet is relevant to.
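
For example, a client could estimate angular velocity from the two most recent timestamped rotation samples along these lines (a sketch with assumed names, not code from the thread):

```csharp
using UnityEngine;

// Sketch with assumed names: derive angular velocity on the client from the
// two most recent timestamped rotation samples instead of sending it over the
// network.
public struct RotationSample
{
    public double networkTime;    // could also be a long tick number
    public Quaternion rotation;
}

public static class AngularVelocityEstimator
{
    // Returns angular velocity as axis * speed, in degrees per second.
    public static Vector3 Estimate(RotationSample previous, RotationSample current)
    {
        double dt = current.networkTime - previous.networkTime;
        if (dt <= 0.0)
            return Vector3.zero;

        // Rotation that takes the previous orientation to the current one.
        Quaternion delta = current.rotation * Quaternion.Inverse(previous.rotation);
        delta.ToAngleAxis(out float angleDegrees, out Vector3 axis);

        if (Mathf.Approximately(angleDegrees, 0f))
            return Vector3.zero;

        // ToAngleAxis can report the long way around; take the shorter arc.
        if (angleDegrees > 180f)
            angleDegrees -= 360f;

        return axis * (angleDegrees / (float)dt);
    }
}
```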

I did asteroids in Disputed Space, which was a solo game with a local co-op split-screen option. I did not do asteroids in Allspace, which was the online multiplayer game. If I had done asteroids in Allspace, the netcode would have been more complicated. I would have probably set up network packets for any significant change to each asteroid. Significant changes to asteroids would include collisions or being blown up. I would not send asteroid data from the server every frame, because that would be a lot of data.
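
That event-driven idea could look roughly like this (purely hypothetical, since Allspace never shipped asteroids; the threshold and names are assumptions):

```csharp
using System;
using UnityEngine;

// Purely hypothetical sketch of the event-driven idea above: only send an
// asteroid update when something significant happens, instead of streaming
// its state every frame. Threshold and names are assumptions.
public class AsteroidNetSync
{
    const float SignificantMoveMeters = 0.5f;   // assumed "significant" movement

    Vector3 lastSentPosition;
    bool destroyedSent;

    // Called by server-side game logic each simulation step.
    public void Tick(Vector3 currentPosition, bool destroyed, Action<Vector3, bool> sendPacket)
    {
        bool movedSignificantly =
            (currentPosition - lastSentPosition).sqrMagnitude >
            SignificantMoveMeters * SignificantMoveMeters;
        bool justDestroyed = destroyed && !destroyedSent;

        if (movedSignificantly || justDestroyed)
        {
            sendPacket(currentPosition, destroyed);
            lastSentPosition = currentPosition;
            destroyedSent = destroyed;
        }
    }
}
```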

In Allspace, the stations were stationary, and the physics on those was enforced client-side.

In Allspace, I advertised 100 players per game server instance. Each game server instance was actually a separate thread in custom server code. One $5 per month AWS Lightsail instance could run 100 threads of my 100 player game server for a total of 10,000 simultaneous clients. I ran some synthetic tests to simulate client loads, and the solution worked well with a simulated 10k client load. Unfortunately, I never had enough simultaneous real players to know how well it scaled beyond synthetic testing.

I even implemented matchmaking that automatically split players between geo regions (Asia, North America, and Europe).

I have seen support issues from some users with integrated Intel graphics. When I first launched Allspace, I had this awesome explosion for ships. Unfortunately, some old Intel integrated graphics would simply show the entire screen completely white instead of showing the explosion. I had to reduce the explosion effects quite a bit to get them to render on those old Intel integrated graphics computers.

I have not seen any weird support issues with AMD or Nvidia products.

I added the VR support during the early access phase.

Thank you so much @ShilohGames !

The 100 tick you mention – do you mean one network update per 100 physics frames via FixedUpdate? Or do you mean a tick as in 1 tick = 100 ns? That would be crazy fast, since 100 of those ticks is only 10 µs.

I absolutely enjoy hearing this. I think what you accomplished is just amazing. It is worth pondering in light of the development timeline for Unity multiplayer.

100 Hz. I can’t really see that tick rate being necessary in a space game unless your ships can turn on a dime.

Edit: though, because of interpolation, latency does go down with a higher tick rate, since the perceived latency is the time spent interpolating between two or more frames plus any network latency. But then again, you don’t need latency that low in a space game. It’s more important in a shooter.

100 tick is 100 Hz, or one frame every 10 ms. A high tick rate is helpful in first-person shooters. It is not needed in every game, though.

When designing the perfect networking for your game, it is important to be very clear about the needs of your game. In Allspace, the high tick rate was a benefit: there were lots of small ships flying around quickly and changing directions often. A low tick rate would have led to more mistakes in the client-side interpolation and extrapolation.

But if a space game has a few large capital ships (instead of a bunch of small, quick fighters), then the high tick rate would be far less helpful. With large capital ships, you could send packets about movement intentions that included timestamp, start position, end position, and velocity. Then each client could slowly interpolate each capital ship’s movement on the client side. You could simply send those packets when there was a significant change to a ship’s movement intention. That could be done at a very low tick rate with UDP, or it could even be done only when needed with RUDP.
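
A movement-intention packet and its client-side interpolation could look something like this (field names and layout are my assumptions, not an actual Allspace format):

```csharp
using UnityEngine;

// Sketch of the movement-intention idea above (field names are assumptions,
// not an actual packet format). The server sends one of these only when a
// capital ship's plan changes; each client interpolates using its own clock.
public struct MovementIntentionPacket
{
    public double timestamp;       // network time when this movement started
    public Vector3 startPosition;
    public Vector3 endPosition;
    public float speed;            // meters per second along the path
}

public static class CapitalShipInterpolation
{
    // Where the ship should be drawn at the given network time.
    public static Vector3 PositionAt(MovementIntentionPacket packet, double networkTime)
    {
        float pathLength = Vector3.Distance(packet.startPosition, packet.endPosition);
        if (pathLength <= Mathf.Epsilon || packet.speed <= 0f)
            return packet.startPosition;

        float elapsed = (float)(networkTime - packet.timestamp);
        float t = Mathf.Clamp01(elapsed * packet.speed / pathLength);
        return Vector3.Lerp(packet.startPosition, packet.endPosition, t);
    }
}
```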

Anyway, the network optimizations for a game about a small number of large, slow-moving ships are different from those for a large number of small, fast-moving ships.

Multiplayer domain compression is fun to play around with. In our game we send the origin of the player as a float (4 bytes). But the head and hands (it’s a VR game) are sent as shorts, giving us a span of plus/minus 3.2767 meters with sub-millimeter precision at half the packet size.
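
That works out to 0.1 mm per unit (32767 × 0.0001 m ≈ 3.2767 m). A sketch of how such an encode/decode could look (my own illustration; the constant and names are assumptions):

```csharp
using UnityEngine;

// Sketch of the short-based offset compression described above (constant and
// names are assumptions). Head/hand offsets relative to the player's origin
// are sent as shorts in 0.1 mm units, covering roughly +/- 3.2767 m.
public static class VrOffsetCodec
{
    const float UnitsPerMeter = 10000f;   // 1 short unit = 0.1 mm

    public static short EncodeAxis(float meters)
    {
        float units = Mathf.Clamp(meters * UnitsPerMeter, short.MinValue, short.MaxValue);
        return (short)Mathf.RoundToInt(units);
    }

    public static float DecodeAxis(short encoded)
    {
        return encoded / UnitsPerMeter;
    }

    // Three shorts (6 bytes) instead of three floats (12 bytes) per offset.
    public static void Encode(Vector3 offsetMeters, out short x, out short y, out short z)
    {
        x = EncodeAxis(offsetMeters.x);
        y = EncodeAxis(offsetMeters.y);
        z = EncodeAxis(offsetMeters.z);
    }

    public static Vector3 Decode(short x, short y, short z)
    {
        return new Vector3(DecodeAxis(x), DecodeAxis(y), DecodeAxis(z));
    }
}
```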
