Hi. I’m working on a session-based 2-10 player multiplayer game (realtime 3D action).
My game depends heavily on Unity’s physics features (collision, physics-driven movement, and so on…) and I want to build the server program using the same code and engine (Unity) as the client side.
But it seems that Unity doesn’t support server-side development well, especially running multiple sessions in a single instance.
I know Unity has a headless mode, but one session per instance is impractical in real-world environments (especially for server running costs).
So, how can I develop a server-side program using Unity, and in particular handle multiple game sessions in a single Unity instance?
If you really want to run multiple games in one instance, maybe you can have the game world duplicated at different coordinates in your scene, all running at once. But because Unity runs game logic and the physics step on a single main thread, this will be a very bad solution, especially since you need physics. If you have multiple instances instead, every instance can run on a different core, which would be a better solution.
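For illustration, a minimal sketch of that offset layout; the arena prefab reference and the spacing value here are made up, and the spacing has to be large enough that no physics object can ever cross between arenas:

```csharp
using UnityEngine;

// Sketch: one shared scene hosting several sessions, each arena placed far
// enough apart that their colliders can never interact.
public class SessionSpawner : MonoBehaviour
{
    [SerializeField] private GameObject arenaPrefab; // root of one game world
    [SerializeField] private float spacing = 2000f;  // keep arenas well apart

    public GameObject SpawnSession(int sessionIndex)
    {
        // Lay each session's copy of the world out along the x axis.
        Vector3 origin = new Vector3(sessionIndex * spacing, 0f, 0f);
        return Instantiate(arenaPrefab, origin, Quaternion.identity);
    }
}
```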
When you say your game depends heavily on Unity’s physics features, does that mean the server itself needs to run the physics simulation, or not?
As far as acting as a server for multiple game instances at the same time goes, I’m not aware of any high-level networking API built for that. You can certainly do it with any lower-level API, of course, since you’re writing all the higher-level functionality yourself. If you’re using the physics system on the server, you’d probably want to isolate the different game instances into their own areas of world space so they don’t directly interact with other games.
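An alternative to plain world-space offsets, assuming a Unity version that has local physics scenes (multi-scene physics): create one scene per session and step each session’s physics world yourself. This is only a sketch and the class and method names I chose are hypothetical, but CreateSceneParameters, LocalPhysicsMode, GetPhysicsScene, and PhysicsScene.Simulate are the actual Unity APIs:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: one additively created scene per session, each with its own local
// physics scene, stepped manually every fixed tick.
public class SessionPhysicsRunner : MonoBehaviour
{
    private readonly List<PhysicsScene> sessions = new List<PhysicsScene>();

    private void Awake()
    {
        // Stop Unity from stepping the default physics scene automatically,
        // so all simulation happens under our control below.
        Physics.autoSimulation = false;
    }

    public Scene CreateSession(string sessionName)
    {
        var parameters = new CreateSceneParameters(LocalPhysicsMode.Physics3D);
        Scene scene = SceneManager.CreateScene(sessionName, parameters);
        sessions.Add(scene.GetPhysicsScene());
        return scene; // instantiate this session's content into this scene
    }

    private void FixedUpdate()
    {
        // Step every session's isolated physics world independently.
        foreach (PhysicsScene physicsScene in sessions)
            physicsScene.Simulate(Time.fixedDeltaTime);
    }
}
```

Note this still runs everything on the main thread, so it solves isolation, not the CPU-scaling problem mentioned above.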
If a single instance handles a single session (10 players), I’d need over 1,000 Unity server instances to process 10,000 simultaneous players. Is there any real-world case like this? (I mean, a multiplayer game built on massive numbers of Unity instances.)
Oh, by a Unity server instance I mean a single Unity process, so I think I can distribute the 1,000 processes across multiple servers (VMs), maybe 10-50 processes per VM. At that density it works out to roughly 20-100 VMs in total.
I’ll profile this to check whether the approach is feasible. Thanks.
10-50 instances per server might be more realistic, depending on the complexity of the scenes and the number of clients per instance. 256 users on 16 instances of an FPS isn’t unrealistic even on hardware from a decade ago, provided there’s enough bandwidth, and MMOs classically ran with shards of 2,500-3,000 clients (one server app instance per physical server). Some have even gone as far as 5,000.
It’s all network architecture, really. Run as many sessions as the servers can handle and write a decent load balancer for your workload (or even use existing ones, if applicable).
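As a sketch of what a minimal load balancer for this could look like (all names and the capacity figure are hypothetical), new sessions simply go to the least-loaded machine that still has room:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of a least-loaded session balancer across a fleet of VMs.
public class SessionBalancer
{
    public class ServerNode
    {
        public string Address;
        public int ActiveSessions;
        public int MaxSessions = 50; // ~10-50 Unity processes per VM, as above
    }

    private readonly List<ServerNode> nodes = new List<ServerNode>();

    public void Register(ServerNode node) => nodes.Add(node);

    // Returns null when the whole fleet is full (time to add hardware).
    public ServerNode AllocateSession()
    {
        ServerNode best = nodes
            .Where(n => n.ActiveSessions < n.MaxSessions)
            .OrderBy(n => n.ActiveSessions)
            .FirstOrDefault();
        if (best != null) best.ActiveSessions++;
        return best;
    }
}
```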
It is certainly possible, but you’ll need to be very efficient with your design. You’re likely to hit CPU/memory bottlenecks once you start loading up server instances on a single physical or virtual server, even if you have extremely efficient net code for handling that many clients.
For example, I’m currently building an MMO game where the game world is chopped up across multiple server instances. I have a master server that manages these instances and allocates physical or virtual hardware to them as players move around the game world (it’s a large world, so locations where no players are currently playing are kept shut down and spun up as needed). My biggest bottleneck right now is CPU: a single server instance under moderate load uses approximately 50% of one core of the few-year-old AMD CPU in my test server, so that server’s 8-core CPU can support roughly 16 server instances before adding more starts to significantly impact the performance of all of them.
I’m sure I have plenty of room to optimize this a lot more, but I seriously doubt I could reach 1,000 server instances unless I made it crazy efficient and used extremely high-end, high-capacity hardware. As far as cost goes, I’m sure it makes more sense to spread the server instances out across several lower-end physical or virtual servers than to cram them all into a single high-end one.
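The spin-up-on-demand part can be as simple as the master server starting and stopping one headless Unity process per zone. A sketch under that assumption; the binary path and the -zone/-port arguments are invented for illustration, while -batchmode and -nographics are Unity’s standard headless flags:

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// Sketch: master server spinning zone processes up and down on demand.
public class ZoneHost
{
    private readonly Dictionary<string, Process> running =
        new Dictionary<string, Process>();

    public void StartZone(string zoneId, int port)
    {
        if (running.ContainsKey(zoneId)) return; // zone already up
        var process = Process.Start(new ProcessStartInfo
        {
            FileName = "/opt/game/Server.x86_64", // hypothetical build path
            Arguments = $"-batchmode -nographics -zone {zoneId} -port {port}",
            UseShellExecute = false,
        });
        running[zoneId] = process;
    }

    public void StopZone(string zoneId)
    {
        if (!running.TryGetValue(zoneId, out var process)) return;
        process.Kill();           // last player left; free the hardware
        running.Remove(zoneId);
    }
}
```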
Yeah, MMO architecture usually has the less popular places sharing hardware with other areas. Designing and testing the architecture for your world can be a long process with only some general guidelines to go on. Individual games might still have surprising bottlenecks or performance issues.
Take the auction house in WoW as an example: it’s in constant use by an absurd number of players, and probably a lot of bots. That subsystem needs its own hardware cluster and plenty of bandwidth, but isn’t very latency-sensitive; many operations take several seconds to return a result. It also needs to be constantly running, because something has to keep track of won/lost/expiring auctions and notify players.
A less popular instanced dungeon (i.e. started as needed) requires only a little bandwidth, but needs low-latency networking and some processing power to handle positioning and collisions. A raid dungeon is more like an FPS map in resource requirements; now CPU and memory needs scale up even more. The overland sections, especially city zones, are the most demanding, and there you need to throw everything at the task.
It’s not been uncommon in the past for MMOs to shift their less populated zones over to older hardware to save on costs. Nowadays you can scale via containers or VMs, with the ability to move them live to another server if demand changes. Pooling also works at this level if you’re renting servers directly rather than ECS-style cloud servers where you pay for running time.
A company on a budget could start with Hetzner servers (a cross between consumer and enterprise hardware, so they cover all price ranges). They even have the option to rent older servers extra cheap, down to about 20 euros for hardware 2-3 generations old if you’re lucky. It would really drive you to do your best on optimisation.