I have a simple card game where each game hosts 8 clients, and I want to run as many instances of these 8-player card matches in parallel as possible.
All players drop their card at the end of a turn (turns last 20 seconds to a minute), so I only need to receive one message from each client containing a ushort (the card dropped), and then send all the other players' cards back to each client as a ushort array. The server then processes the card logic (attack, defence, and special cases), which involves a little bit of hashing, lookups, special-case logic, etc.
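To make the per-turn traffic concrete, it's basically just this (placeholder types, not actual code from the project):

```csharp
// Placeholder types just to show the size of the per-turn messages.
public struct CardPlayed   // client -> server, once per turn
{
    public ushort CardId;
}

public struct TurnUpdate   // server -> each client, once per turn
{
    public ushort[] CardsPlayed; // one entry per player in the match (8 here)
}
```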
Players' decks will be pulled from a separate API that's not running on the server.
What would be the best way to go about keeping each of these 8-player games in isolation from all the other instances, using Netcode for Entities?
With what you know about the game, how many games would you roughly estimate I could realistically handle with this system, running ECS for players and cards?
I think using Netcode for Entities for this purpose is overkill. Given the scenario, it is probably better to just write a small custom server layer that uses Transport directly. This will give you more options.
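As a very rough sketch (assuming Unity Transport 2.x; every name here is made up, and error/disconnect handling is omitted), the whole server layer for one match is not much more than:

```csharp
using Unity.Collections;
using Unity.Networking.Transport;

// Minimal per-match server: accepts up to 8 connections, reads one ushort
// (the dropped card) per client per turn, and broadcasts the results back.
public class MatchServer
{
    NetworkDriver m_Driver;
    NativeList<NetworkConnection> m_Connections;

    public void Start(ushort port)
    {
        m_Driver = NetworkDriver.Create();
        m_Connections = new NativeList<NetworkConnection>(8, Allocator.Persistent);
        if (m_Driver.Bind(NetworkEndpoint.AnyIpv4.WithPort(port)) == 0)
            m_Driver.Listen();
    }

    // Call once per frame/tick from your own loop.
    public void Update()
    {
        m_Driver.ScheduleUpdate().Complete();

        // Accept up to 8 players for this match.
        NetworkConnection c;
        while ((c = m_Driver.Accept()) != default(NetworkConnection))
            m_Connections.Add(c);

        for (int i = 0; i < m_Connections.Length; i++)
        {
            NetworkEvent.Type evt;
            while ((evt = m_Driver.PopEventForConnection(m_Connections[i], out DataStreamReader reader))
                   != NetworkEvent.Type.Empty)
            {
                if (evt == NetworkEvent.Type.Data)
                {
                    ushort card = reader.ReadUShort();
                    // ... store the card for this seat; when all 8 have arrived,
                    //     resolve the turn and call Broadcast with the results ...
                }
            }
        }
    }

    // Send every player's card back to each client as a flat ushort array.
    void Broadcast(NativeArray<ushort> playedCards)
    {
        for (int i = 0; i < m_Connections.Length; i++)
        {
            if (m_Driver.BeginSend(m_Connections[i], out DataStreamWriter writer) == 0)
            {
                for (int j = 0; j < playedCards.Length; j++)
                    writer.WriteUShort(playedCards[j]);
                m_Driver.EndSend(writer);
            }
        }
    }

    public void Dispose()
    {
        m_Driver.Dispose();
        m_Connections.Dispose();
    }
}
```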
That being said, you can use Netcode for Entities for this, but I would then probably focus on only sending RPCs and not use ghosts. There is nothing worth sending to the client apart from the update after the turn has been simulated, which happens at discrete times.
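If you do stick with Netcode for Entities, the RPC-only flow on the server could look roughly like this (a sketch assuming Netcode for Entities 1.x; PlayCardRpc and the system name are made up). A matching IRpcCommand going from the server back to the clients can carry the per-turn results.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.NetCode;

// Client -> server: the single ushort played this turn.
public struct PlayCardRpc : IRpcCommand
{
    public ushort CardId;
}

// Server-only system that consumes the incoming RPCs. No ghosts involved.
[WorldSystemFilter(WorldSystemFilterFlags.ServerSimulation)]
[BurstCompile]
public partial struct CollectCardRpcSystem : ISystem
{
    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        var ecb = new EntityCommandBuffer(Allocator.Temp);
        foreach (var (rpc, request, entity) in
                 SystemAPI.Query<RefRO<PlayCardRpc>, RefRO<ReceiveRpcCommandRequest>>()
                          .WithEntityAccess())
        {
            var sender = request.ValueRO.SourceConnection; // connection entity of the player
            ushort card = rpc.ValueRO.CardId;
            // ... record 'card' against the sender's seat; once all 8 have arrived,
            //     resolve the turn and send the result RPC back to each client ...
            ecb.DestroyEntity(entity); // consume the RPC entity
        }
        ecb.Playback(state.EntityManager);
        ecb.Dispose();
    }
}
```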
For the tenancy part:
If you want complete data isolation, having multiple ServerWorlds, one per game, is what gives you the best isolation. However, we don't have any built-in way in Entities to run each world on a separate thread or core: the worlds reside in the same process and their updates are called by the main thread. Although this is a limitation, each world can still schedule jobs that execute in parallel. We just don't have a way yet to execute each world's update on its own thread.
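For example, a bootstrap that spins up one isolated ServerWorld per match could look something like this (a sketch assuming Netcode for Entities 1.x and Transport 2.x; the one-listen-port-per-match scheme is just an assumption, you could also share a single port and route by connection):

```csharp
using Unity.Entities;
using Unity.NetCode;
using Unity.Networking.Transport;

public static class MatchLauncher
{
    // Creates an isolated ServerWorld for one 8-player match and starts it
    // listening on its own port. The world is updated by the player loop,
    // i.e. still driven from the main thread, but its jobs run in parallel.
    public static World CreateMatchWorld(int matchIndex, ushort basePort)
    {
        var world = ClientServerBootstrap.CreateServerWorld($"MatchServerWorld{matchIndex}");

        var endpoint = NetworkEndpoint.AnyIpv4.WithPort((ushort)(basePort + matchIndex));
        using var driverQuery = world.EntityManager
            .CreateEntityQuery(ComponentType.ReadWrite<NetworkStreamDriver>());
        driverQuery.GetSingletonRW<NetworkStreamDriver>().ValueRW.Listen(endpoint);

        return world;
    }
}
```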
A far weaker data separation is to have every game live in the same server world, with the data partitioned using SharedComponents (or something similar). But then all clients live in the same world, effectively opening the door to bugs that can affect other players/games, so it is not the best approach. In terms of scalability, you can schedule all the game updates in parallel via jobs (by disabling some safety restrictions).
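A sketch of that partitioning idea, where GameId and PlayedCard are made-up components:

```csharp
using Unity.Entities;

// Every player/card entity gets tagged with the match it belongs to.
// Shared component values split entities into separate chunks per match,
// so per-match processing naturally touches only that match's chunks.
public struct GameId : ISharedComponentData
{
    public int Value;
}

public struct PlayedCard : IComponentData
{
    public ushort Value;
}

public partial struct PerMatchTurnSystem : ISystem
{
    public void OnUpdate(ref SystemState state)
    {
        int finishedMatch = 3; // would come from your own turn bookkeeping

        // Only iterates chunks whose GameId matches the filter value.
        foreach (var card in SystemAPI.Query<RefRW<PlayedCard>>()
                     .WithSharedComponentFilter(new GameId { Value = finishedMatch }))
        {
            // ... attack/defence/special-case resolution for this match only ...
        }
    }
}
```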
We are somewhat limited by the fact that the server still runs as a Unity Player, and that world updates are driven by the player loop, which is inherently single-threaded.
However, nothing prevents you from running multiple player instances per server machine, potentially one per core, and handling as much as possible on a per-core basis, using a multiple-worlds-per-process approach.
With pure Netcode for Entities, perhaps disabling some server systems you know are never going to run to reduce overhead even further, and given the pretty low complexity of the server computation and the discrete nature of the simulation (the server only simulates something when it receives the clients' moves for the round), I would not be surprised if you could optimise to the point of running hundreds of games or more per core. But it is hard to tell without the full picture. The condition, of course, is that everything runs as much as possible using Burst.
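For the discrete part, one way (again just a sketch) to make sure the turn resolution only runs on frames where a client move actually arrived is to gate the server system on the incoming RPC component from the sketch above:

```csharp
using Unity.Entities;
using Unity.NetCode;

[WorldSystemFilter(WorldSystemFilterFlags.ServerSimulation)]
public partial struct ResolveTurnSystem : ISystem
{
    public void OnCreate(ref SystemState state)
    {
        // Only run OnUpdate on frames where at least one PlayCardRpc
        // (from the RPC sketch above) has been received.
        state.RequireForUpdate<PlayCardRpc>();
    }

    public void OnUpdate(ref SystemState state)
    {
        // ... per-turn card resolution: hashing, lookups, special cases ...
    }
}
```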