[0.17] Question regarding performance, relevancy usage, and a potential bug

The setup for my project is approximately as follows:

I have a very large world, and the server spawns objects around each actor (player and NPC), so that ghosts can be sent to the players and so that NPCs can query the world around them (AI).

This all works fine, but stress testing revealed a problem: when a very large number of ghosts (~10000) is spawned in a single frame, the game allocates memory without bound and crashes.

I haven’t looked into it extensively, but after some debugging, this is what appears to be happening:

  1. A data stream is requested from the driver, but the data doesn’t fit, so targetSnapshotSize is increased.
  2. Eventually, the requested size exceeds the maximum pipeline payload size, which seems to be 16 KB.
  3. driver.BeginSend actually returns Error.StatusCode.NetworkPacketOverflow, but the while loop never handles this.
  4. Even if driver.AbortSend is called, the loop keeps retrying with an ever larger size until the OS eventually kills Unity (roughly the pattern sketched below).
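For reference, here is a minimal sketch of the failure pattern as I understand it, not the actual GhostSendSystem code. BeginSend, AbortSend and EndSend are real Unity.Networking.Transport calls; the loop shape, the starting size, and the TrySerializeSnapshot helper are my reconstruction:

```csharp
using Unity.Collections;
using Unity.Networking.Transport;

static class SnapshotSendSketch
{
    // Hypothetical reconstruction of the runaway resize loop, NOT the
    // actual GhostSendSystem source.
    public static void SendSnapshot(NetworkDriver driver, NetworkPipeline pipeline,
                                    NetworkConnection connection)
    {
        int targetSnapshotSize = 1200; // illustrative starting size
        while (true)
        {
            // Once targetSnapshotSize exceeds the pipeline's maximum payload
            // (~16 KB), BeginSend fails with
            // Error.StatusCode.NetworkPacketOverflow -- but the status code
            // is never checked, which is the bug described above.
            driver.BeginSend(pipeline, connection, out DataStreamWriter writer,
                             targetSnapshotSize);

            if (TrySerializeSnapshot(ref writer)) // placeholder for ghost serialization
            {
                driver.EndSend(writer);
                return;
            }

            // The data didn't fit (or the writer was never valid): abort and
            // retry with a bigger buffer, forever.
            driver.AbortSend(writer);
            targetSnapshotSize *= 2;
        }
    }

    // Placeholder: returns false whenever the snapshot doesn't fit.
    static bool TrySerializeSnapshot(ref DataStreamWriter writer) => false;
}
```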

The thing that confused me about this is that only a very tiny subset of these ghosts is actually in the relevancy set for the player (~32 during a crash). However, it seems the server also needs to send a despawn message for all ghosts, even ones that were never spawned on the client, and that appears to be what overflows the buffer.

Now I have a few questions. Since I am still on 0.17: were any changes made to this in 0.50? I feel like this should at least log some kind of error message, as it took a while to figure out what was going on.

The other question is: how do I work around this? My current setup for determining the relevancy set uses the physics world to find all entities close to a player-controlled unit and adds those to the relevancy set (with GhostRelevancyMode.SetIsRelevant), roughly as in the sketch below.
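This is a simplified sketch assuming the 0.17-era API, where GhostSendSystem exposes GhostRelevancyMode and a GhostRelevancySet keyed by RelevantGhostForConnection; the nearby-entity gathering is a placeholder for my actual physics query:

```csharp
using Unity.Entities;
using Unity.NetCode;

// Simplified sketch of the proximity-based relevancy setup; the
// GetNearbyGhostIds helper is hypothetical and stands in for the real
// physics-world overlap query.
[UpdateInGroup(typeof(ServerSimulationSystemGroup))]
public class ProximityRelevancySystem : ComponentSystem
{
    GhostSendSystem m_GhostSendSystem;

    protected override void OnCreate()
    {
        m_GhostSendSystem = World.GetExistingSystem<GhostSendSystem>();
        // Only ghosts present in the set are considered relevant.
        m_GhostSendSystem.GhostRelevancyMode = GhostRelevancyMode.SetIsRelevant;
    }

    protected override void OnUpdate()
    {
        var relevancySet = m_GhostSendSystem.GhostRelevancySet;
        relevancySet.Clear();

        // For every connection, mark the ghosts near its player unit relevant.
        Entities.ForEach((ref NetworkIdComponent netId) =>
        {
            foreach (int ghostId in GetNearbyGhostIds(netId.Value))
                relevancySet.TryAdd(
                    new RelevantGhostForConnection(netId.Value, ghostId), 1);
        });
    }

    // Hypothetical helper standing in for the physics query around the
    // connection's player-controlled unit.
    System.Collections.Generic.IEnumerable<int> GetNearbyGhostIds(int connectionId)
    {
        yield break;
    }
}
```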

Is this even feasible at my scale (given the forced sending of a despawn message on connect)? An alternative I have thought about (though it would require “extra” work) is to let the server spawn all procedural objects around ALL units (player and NPC), but not as ghosts, while the client spawns all procedural objects around its own view. If a procedural object is modified from its initial state (e.g. a player damaged it, or it has some other property that needs to be synchronized), then the server destroys the procedural object and replaces it with a ghost that has the same values; once the client receives this ghost, it deletes its local copy and replaces it with the ghost. This would only be used for static objects (e.g. trees, harvestables), not dynamic things like enemies (as those need server-side AI). A rough sketch of the server-side promotion step follows.
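To make that concrete, here is a minimal sketch of the server-side “promote to ghost” step. Everything here is hypothetical and illustrative: the Durability component, the assumption that the ghost prefab shares Translation with the plain entity, and the helper itself.

```csharp
using Unity.Entities;
using Unity.Transforms;

// Hypothetical synchronized state for the example.
public struct Durability : IComponentData { public float Value; }

public static class ProceduralPromotion
{
    // Replaces a plain (non-ghost) procedural entity with a ghost carrying
    // the same values; ghostPrefab is assumed to be a converted ghost prefab.
    public static Entity PromoteToGhost(EntityManager em, Entity proceduralObject,
                                        Entity ghostPrefab)
    {
        // Copy over the state the ghost needs to start from.
        var translation = em.GetComponentData<Translation>(proceduralObject);
        var durability = em.GetComponentData<Durability>(proceduralObject);

        var ghost = em.Instantiate(ghostPrefab);
        em.SetComponentData(ghost, translation);
        em.SetComponentData(ghost, durability);

        // The plain entity is gone on the server; once the ghost arrives on
        // a client, that client deletes its locally spawned copy too.
        em.DestroyEntity(proceduralObject);
        return ghost;
    }
}
```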

While I think the above approach would be feasible, I worry that it still wouldn’t work if the server loads a game from a save file with a lot of procedural entities that are already modified and thus need to be spawned as ghosts. When a client connects, it would not have acked any of these entities, so the server would send despawn messages for a lot of entities at once, overflow the buffer again, and then crash.

Any insight and help is appreciated as always! :slight_smile:

There are at least 2 changes in 0.50 related to this.

  1. We no longer require all the despawns to be sent in the same frame. If there are too many despawns to fit, we split them up over multiple snapshots, which solves the problem of overflowing the packet size (conceptually, as sketched at the end of this reply).

  2. When entities become irrelevant, 0.50 no longer sends despawns for those which were never sent to the client in the first place.

For this case, it sounds like you want both of those changes to make it perform well and be safe in all cases.
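Conceptually, the despawn splitting in change 1 works like the sketch below: pending despawns sit in a per-connection queue, and each snapshot only drains as many as fit its byte budget. This is just an illustration of the idea, not the actual 0.50 implementation:

```csharp
using System.Collections.Generic;

// Illustration only, NOT the actual 0.50 code. Each despawn id is assumed
// to cost a fixed number of bytes for simplicity.
public class DespawnQueue
{
    readonly Queue<int> m_Pending = new Queue<int>();
    const int k_BytesPerDespawn = 4; // simplifying assumption

    public void Enqueue(int ghostId) => m_Pending.Enqueue(ghostId);

    // Drain only as many despawns as fit this snapshot's budget; the rest
    // stay queued for later snapshots instead of overflowing the packet.
    public List<int> TakeForSnapshot(int byteBudget)
    {
        var batch = new List<int>();
        while (m_Pending.Count > 0 && byteBudget >= k_BytesPerDespawn)
        {
            batch.Add(m_Pending.Dequeue());
            byteBudget -= k_BytesPerDespawn;
        }
        return batch;
    }
}
```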

Awesome, that sounds perfect! Thanks for the quick reply as always :slight_smile: