We are developing a game where teams of ~5 players will fight. Players will join a queue and the server will find appropriate opponents and teammates. Players can choose what to bring into the game from the items they have in the lobby. There will be no late joining. When the game ends, players will return to the lobby. Basically like MOBAs.
We already have the essential gameplay done with UNET for testing the idea, using the NetworkLobby example from the Asset Store. Now I'm doing research on possible solutions for the rest of the networking. UNET seems fine for gameplay, but I'm not sure we can do everything we want with it. It's a complicated topic and every piece of information you can give is highly appreciated.
That's what I think it should look like in the end.
-
Players connect to a master server which handles all lobby activities and has a connection to a database with all the player stats, inventory etc. I don't see any reason to use Unity to make the master server part, since none of the game engine parts of Unity will be used there anyway. So far I know there are network libraries like Lidgren that can be used to make a connection from a Unity client to a non-Unity server. But what about the scalability of this approach? What should we do to make it able to handle a large number of clients eventually?
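For reference, a minimal sketch of what a Lidgren.Network connection from a Unity client to a standalone master server could look like; the app identifier, host, port and string-based protocol here are placeholders, not a recommendation for the actual protocol.

```csharp
using Lidgren.Network;

public class MasterServerClient
{
    NetClient client;

    public void Connect()
    {
        // The app identifier must match on client and server.
        var config = new NetPeerConfiguration("MyGameLobby");   // placeholder identifier
        client = new NetClient(config);
        client.Start();
        client.Connect("lobby.example.com", 14242);              // placeholder host/port
    }

    // Call this regularly (e.g. from a Unity Update loop) to pump incoming messages.
    public void Poll()
    {
        NetIncomingMessage msg;
        while ((msg = client.ReadMessage()) != null)
        {
            if (msg.MessageType == NetIncomingMessageType.Data)
                HandleLobbyMessage(msg.ReadString());             // your own protocol goes here
            client.Recycle(msg);
        }
    }

    void HandleLobbyMessage(string payload) { /* parse and dispatch */ }
}
```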
-
The master server will start a batchmode game server for the clients in a match and set up a UNET connection between the clients and the game server. After the match, players will return to the lobby.
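As an illustration, spawning a headless game server from a master server process could look roughly like this; the executable path is a placeholder, and -port/-match are custom arguments the game server itself would have to parse on startup.

```csharp
using System.Diagnostics;

class GameServerLauncher
{
    // Launch one headless Unity game server for a match.
    public Process StartGameServer(int port, string matchId)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "/opt/game/GameServer.x86_64",                              // placeholder path
            Arguments = "-batchmode -nographics -port " + port + " -match " + matchId,
            UseShellExecute = false
        };
        return Process.Start(startInfo);
    }
}
```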
The most important question: am I moving in the right direction? Will using a real server to host remove NAT punchthrough issues and the need for a relay server? What exactly is the server library on the Unity roadmap, and will it make anything easier here? I don't want to wait 4 months for nothing. What solutions exist for the master server?
I think you are heading in the right direction. That opinion is based on the fact that we already made one game with that architecture and are currently making another one with it.
To make this architecture handle a large number of users I suggest putting another layer of servers between the game clients and the master server. In this post I will simply call them proxy servers. Those proxy servers should handle everything regarding one user, for example authentication, inventory management (unless it is transaction-based, as it should be for user-to-user trades), store management (buying things with microtransactions) and persistent storage of everything in a database. The master server should handle everything regarding multiple users, for example game server management, party management and matchmaking. With that architecture there can be many proxy servers, but only one master server.

If the game clients connect only to the proxy servers and never see the master server's IP address, you even get decent protection against DDoS attacks targeted at the master server, the single point of failure in this architecture. The whole thing only works if you host all servers yourself and never give any software to the user except the game client. Hosting all servers yourself and configuring all their firewalls yourself will even remove all NAT problems.
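A minimal sketch of how that split of responsibilities could look on a proxy server; every type and message name here is hypothetical and only illustrates the per-user vs. multi-user separation.

```csharp
using System;

// Hypothetical message categories; a real protocol would of course carry payloads.
enum MessageType { Login, InventoryRequest, StorePurchase, JoinQueue, PartyInvite }

class ProxyServer
{
    // Per-user concerns are handled on the proxy; cross-user concerns go to the one master server.
    public void HandleClientMessage(Guid playerId, MessageType type)
    {
        switch (type)
        {
            case MessageType.Login:
            case MessageType.InventoryRequest:
            case MessageType.StorePurchase:
                HandleLocally(playerId, type);     // proxy talks to the database itself
                break;

            case MessageType.JoinQueue:
            case MessageType.PartyInvite:
                ForwardToMaster(playerId, type);   // master never sees the client directly
                break;
        }
    }

    void HandleLocally(Guid playerId, MessageType type) { /* auth, inventory, store ... */ }
    void ForwardToMaster(Guid playerId, MessageType type) { /* send over the proxy<->master link */ }
}
```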
If not done completely wrong, you could initially spare yourself the hassle of implementing proxy servers and introduce them later on, once you really have so many users that your lonely master server is overwhelmed. In that case, well, congratulations on the popularity…
Regarding network connectivity: In the first game we used the old Unity network between the game clients and the game servers. Between everything else we used encrypted Websocket connections, which are simple TCP connections plus message framing, and a protocol based on JSON objects. In the second game we use encrypted TCP connections with message framing from the protobuf utility functions and a protocol based on protobuf. The old Unity network had not been robust enough for us, and the new Unity network has not been out long enough for us.
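For illustration, length-prefixed protobuf messages over a TCP stream could look roughly like this with protobuf-net; the message type and its fields are placeholders, not the actual protocol described above, and other protobuf libraries offer equivalent delimited read/write helpers.

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
class LobbyMessage                       // placeholder message type
{
    [ProtoMember(1)] public string Command { get; set; }
    [ProtoMember(2)] public string Payload { get; set; }
}

static class Framing
{
    // Length-prefixed writes/reads let several messages share one TCP stream.
    public static void Write(Stream stream, LobbyMessage msg)
    {
        Serializer.SerializeWithLengthPrefix(stream, msg, PrefixStyle.Base128);
    }

    public static LobbyMessage Read(Stream stream)
    {
        return Serializer.DeserializeWithLengthPrefix<LobbyMessage>(stream, PrefixStyle.Base128);
    }
}
```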
Our authoritative game servers are Unity-based, headless, and run with the batchmode and nographics options on Linux. In the first game the master server was written in JavaScript and ran on node.js, because using Websockets and JSON objects is dead simple in node.js. In addition you can simply dump all network traffic into a file and debug the whole thing manually with any text editor that can pretty-print JSON objects, which is priceless in some cases. In the second game the master server will be pure C# on Mono on Linux, because then we have C# as the universal language and Visual Studio as the universal IDE in the whole project. Makes some things easier. protobuf was preferred over JSON for performance and network traffic reasons. But JSON is fine, too, and can be debugged more easily.
I suggest making your game servers as configuration-less as possible. They should all simply connect to the master server and get all the necessary configuration from it. That makes game server management a lot easier and more dynamic (for example "all game servers, please make 4 vs 4 instead of 5 vs 5 matches now" or "all game servers, please update yourself after the current match is finished" from the master server to all game servers).
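A minimal sketch of what such a configuration-less startup could look like on the game server side; the config fields, the JSON transport and all names are assumptions, not our actual implementation.

```csharp
using UnityEngine;

// Hypothetical config pushed from the master server. The game server ships with
// nothing but the master's address and applies whatever it receives at runtime.
[System.Serializable]
public class ServerConfig
{
    public int teamSize = 5;          // e.g. master switches all servers to 4 vs 4
    public string mapName = "Arena";  // placeholder field
    public bool updateAfterMatch;     // "please update yourself after the current match"
}

public class GameServerBootstrap
{
    // Called once the game server's connection to the master server is established.
    public void OnConfigReceived(string configJson)
    {
        // JsonUtility is used here for brevity; any serializer would do.
        ServerConfig config = JsonUtility.FromJson<ServerConfig>(configJson);
        ApplyConfig(config);
    }

    void ApplyConfig(ServerConfig config)
    {
        // Set up match rules, load the map, schedule the self-update, ...
        Debug.Log("Running " + config.teamSize + " vs " + config.teamSize + " on " + config.mapName);
    }
}
```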
Hope this helps. All the best in going that direction now!
Ya @TeeTeeHaa has some good advice. I strongly prefer anything that doesn't route all traffic through some master server, and everything should be able to auto-configure itself on deploy. I pretty much have this down to front-end HTTP load-balanced servers that handle matchmaking and app server assignment, app servers that handle most everything else, and Unity instances that just handle stuff that only Unity can do well. In my case, for a match the HTTP servers coordinate that and then I just have clients reconnect to the app server chosen for the match. In most cases my Unity instances are actually managed by the app server itself; it starts a pool of them. So every physical server is one app server and up to 40 or so Unity instances, depending on need.
Handling Unity instances is the tricky part. One approach to consider is to not assign a Unity instance per match. Sometimes you need to dedicate them to stuff, like if you want to use an instance to run a bunch of NPCs where you are making a lot of pathfinding calls or need to be constantly making raycasts, etc. But take a simple FPS game where all you really need is to do some LOS checks or maybe a spherecast here and there to see how many players that grenade hit. My entire combat system is in the non-Unity app server, and I just make RPC calls to a Unity instance to perform those limited calculations. I could just pick any Unity instance from a pool of instances all running the same scene and it would work. The Unity instance just moves some capsule colliders into position, does the calculations, and returns the result.
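A rough sketch of what the Unity-side handler for such queries could look like; the class name and entry points are illustrative, the RPC layer that calls them is omitted, and the grenade check is approximated here with simple linecasts against the level geometry rather than moving per-player colliders around.

```csharp
using UnityEngine;

// Runs inside a pooled Unity instance that only has the level geometry loaded.
// The app server sends world positions and this answers simple physics queries.
public class PhysicsQueryService : MonoBehaviour
{
    // Line-of-sight: is anything in the level blocking the line between two points?
    public bool HasLineOfSight(Vector3 from, Vector3 to)
    {
        return !Physics.Linecast(from, to);
    }

    // Grenade check: how many of the reported player positions are inside the blast
    // radius and not hidden behind level geometry?
    public int CountPlayersHit(Vector3 blastCenter, float blastRadius, Vector3[] playerPositions)
    {
        int hits = 0;
        foreach (Vector3 playerPos in playerPositions)
        {
            bool inRange = (playerPos - blastCenter).magnitude <= blastRadius;
            if (inRange && !Physics.Linecast(blastCenter, playerPos))
                hits++;
        }
        return hits;
    }
}
```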
Unity uses a lot of CPU per instance as it's just not designed to be a server, and process management at scale is always a pain. Sometimes you don't have a choice, but my overall design is to always use Unity for what it's actually needed for. With a good concurrency framework like Akka or Orleans, a single app server could easily handle all the logic for a few hundred players. So you want to leverage that and try to use Unity just for the stuff you can only do well in Unity. Usually this comes down to physics and things like pathfinding.
Thank you guys for your enlightening answers.
@snacktime It's an interesting idea to use one Unity game server for multiple matches. According to my calculations so far, running a separate instance for each match won't cost us too much, but we will definitely consider this option. Handling multiple processes on the same machine seems reasonably easy to do through a local "gameserver manager", which runs Unity server processes and communicates with the master server. I'm probably missing something here, can you please elaborate on it? And in your implementation, do clients connect to Unity processes directly or to the local "app servers"?
I'm currently looking into the Photon Server framework and it seems to do everything we need for a reasonable price tag as well. Writing all that server stuff is a lot of work and in some sense harder than coding gameplay. We want to avoid reinventing the wheel wherever we can. Are there any possible drawbacks to using Photon Server that anyone knows about?
So the way I do stuff is with Game Machine, which is a project I've worked on for close to 3 years, so I've had time to fine-tune it. There are several ways that will work just fine, including yours. Mine starts to be a lot more appealing at scale, both in simplicity and cost. Unity chews up a noticeable chunk of CPU per instance, and it's designed to run as a client, so it's not even using the CPU effectively in a server environment. With Game Machine, for example, I use Akka, which scales concurrency very well with a lot of cores. So with a single core both approaches are about the same, but as you add cores, an app designed to be a server is going to handle more work for the same amount of CPU than Unity will, and it's going to be by a larger factor for every core you add. At a certain point it's fairly ridiculous.
All that said, I wouldn't change how you are doing things significantly. It's not worth rewriting what you have if you have done the numbers right and can make money that way. I do think it is worth looking at maybe not having an instance per match if you get to any scale; I think you would see a lot of cost savings doing that at some point.
With Game Machine, clients connect to Game Machine, not Unity. The server-side Unity instances connect to Game Machine as regular clients. Game Machine is an actual distributed system, so I designed the clients so they function as just a special type of node in the cluster they connect to. Every client, whether it's a player or running on the server, is addressable from every other client and from every Game Machine node.
So in that paradigm it's easy for me to do things like have my combat system call out to a Unity instance to do a raycast for it and return the results. From the combat code it looks like just another actor; it doesn't even know it's a Unity instance. It just calls a helper method saying "I want the actor responsible for handling raycasting", then calls it. Behind the scenes there is a bunch of logic that handles assigning Unity instances to actor pools and such, but to the game logic it's transparent.
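As a sketch of that idea, with plain C# interfaces standing in for the actor plumbing; every name here is hypothetical and not Game Machine's actual API.

```csharp
using System.Threading.Tasks;

// Hypothetical stand-in for "the actor responsible for raycasting"; behind it a pooled
// Unity instance answers the query, but callers never see that.
interface IRaycastActor
{
    Task<bool> HasLineOfSight(float fromX, float fromY, float fromZ,
                              float toX,   float toY,   float toZ);
}

// Hypothetical helper that hands out a reference to whoever currently handles a job.
interface IActorRegistry
{
    T Get<T>();
}

class CombatSystem
{
    readonly IActorRegistry registry;
    public CombatSystem(IActorRegistry registry) { this.registry = registry; }

    public async Task<bool> CanHit(float fromX, float fromY, float fromZ,
                                   float toX,   float toY,   float toZ)
    {
        // Combat logic only sees "an actor that answers raycasts", never a Unity process.
        IRaycastActor raycaster = registry.Get<IRaycastActor>();
        return await raycaster.HasLineOfSight(fromX, fromY, fromZ, toX, toY, toZ);
    }
}
```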