How can I know whether a [GhostField] is dirty or not? When it's dirty, I want to run my logic immediately (like updating the UI); otherwise, I should do nothing, for better performance.
Hey Hecocos! Netcode for Entities supports Entities Chunk Change Versions, which allow you to filter for components on chunks that have been modified by any system.
However, because it's simply one counter per component per chunk, you will get false positives. For example:
- Ghost entities A, B and C are in the same chunk (Chunk1).
- B has a component (Foo) which gets modified on the server, and Netcode for Entities replicates this change (due to the [GhostField] attribute).
- When the client receives the snapshot informing it of this change (to ghost entity B), the GhostUpdateSystem will bump the change version for component Foo on Chunk1. Any query that filters for changes on the Foo component will therefore iterate over entities A (false positive), B (true positive), and C (false positive).
Remember that the server will send updates for the entire chunk (Chunk1) in that snapshot (assuming there is room), so, in practice, if B has changed, more than likely A and/or C will have too.
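The chunk-level filter described here can be expressed as a change-filtered query. A minimal sketch (assuming Entities 1.x `SystemAPI`, and a hypothetical replicated `IComponentData` named `Foo` carrying a [GhostField]):

```csharp
using Unity.Entities;

// Client-side system: the query only visits chunks whose Foo change
// version was bumped since this system last ran.
[WorldSystemFilter(WorldSystemFilterFlags.ClientSimulation)]
public partial struct FooChangedSystem : ISystem
{
    public void OnUpdate(ref SystemState state)
    {
        foreach (var foo in SystemAPI.Query<RefRO<Foo>>().WithChangeFilter<Foo>())
        {
            // Untouched chunks are skipped entirely, but A and C in Chunk1
            // still show up here (false positives) whenever B changed.
            // React to the (possibly) changed value, e.g. mark the UI dirty.
        }
    }
}
```

This is a sketch of the technique, not code from the package; adapt names and the reaction logic to your project.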
If you want exact per-entity filtering, you can do the above plus the following:
public struct Foo : IComponentData
{
    // You're updating this value on the server. (T is a placeholder for your field type.)
    [GhostField] public T TheFieldYouWantToObserveChangesIn;

    // In a client system, compare this cache to the real value, then update UI etc. when they diverge.
    public T _ClientCache_TheFieldYouWantToObserveChangesIn;

    // Note: You can still use Chunk Change Filtering in that client system as a
    // 'pre-pass', performing the majority of the filtering.
}
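Putting the two together (chunk change filtering as a pre-pass, then the exact per-entity cache comparison) might look like this sketch, again assuming Entities 1.x `SystemAPI` and `==`-comparable field types:

```csharp
using Unity.Entities;

[WorldSystemFilter(WorldSystemFilterFlags.ClientSimulation)]
public partial struct FooUIUpdateSystem : ISystem
{
    public void OnUpdate(ref SystemState state)
    {
        // Pre-pass: only visit chunks whose Foo change version was bumped.
        foreach (var foo in SystemAPI.Query<RefRW<Foo>>().WithChangeFilter<Foo>())
        {
            // Exact filter: compare the replicated value to the client-side cache.
            if (foo.ValueRO.TheFieldYouWantToObserveChangesIn ==
                foo.ValueRO._ClientCache_TheFieldYouWantToObserveChangesIn)
                continue; // False positive from the chunk filter; skip.

            // The value really changed for this entity: update UI etc. here,
            // then refresh the cache.
            foo.ValueRW._ClientCache_TheFieldYouWantToObserveChangesIn =
                foo.ValueRO.TheFieldYouWantToObserveChangesIn;
        }
    }
}
```

Caveat: acquiring RefRW write access bumps Foo's change version itself, so this system will revisit the chunk on its next update; the per-entity comparison then filters those revisits out, at the cost of some extra iteration.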
We currently do not support a built-in solution for raising queryable events on a 'per-ghost, per-field value changed' basis.
Thanks for your reply.
So my conclusion:
1. WithChangeFilter: this can skip the chunks (Chunk2, Chunk3, …) in which no Foo component has been changed. But the query result will still include A's and C's Foo components when I only changed the [GhostField] of B's Foo. It's a coarse filter.
2. Local cache: exact, but with twice the memory cost, plus the cost of copying between the [GhostField] value and the cached one.
Yep, exactly. Doing 1 should be more than sufficient, unless you’re triggering an exceptionally expensive operation.
OK! Thanks! I will try it.
Question about that: when an ECS chunk has 20 ghost entities and only one ghost entity's components changed in that chunk, are all 20 entities put into a snapshot sent from the server to all clients? How does the delta compression work here, given that 19 entities were not changed?
Correct, but with extremely heavy compression.
Note that if there are zero GhostField changes in a chunk:
- For Static ghosts, this chunk will not be added to the snapshot at all. The entire chunk is skipped.
- For Dynamic ghosts, this chunk will always be added to the snapshot. The entire chunk is always sent.
So, noting that we do send every entity in the chunk when one changes (assuming they fit), delta-compression here is extensive:
- We send a small entity header (ghostId, baseline counters, changemask, enabledmask). This header is aggressively delta-compressed, and thus typically in the region of <12 bits per entity.
- For any changed entity components containing GhostFields, we send the delta-compressed changed values (using the changemask).
The NetDbg tool shows this nuance. If you take a look at the NetCube sample in NetcodeSamples, you'll note that the more thin clients you have connected, the higher the bandwidth cost is to replicate only the one cube that is moving (as we'll resend the other, unmoving cubes too, since they're part of that same chunk).
To clarify, if only one ghost changes in the chunk, we do:
- the serialization pass to determine what has changed (we compute all the changemasks, and so on).
- the changemasks (which are probably all 0) are delta-compressed against the previous baseline; if a mask was already 0, it takes (pretty much) 1 bit to send every 32 bits of mask.
- if there are no changes for a ghost, the ghost data is not sent or copied at all. The only data sent is the ghost id and the delta-compressed changemask. The reason is that a delta of 0 does not imply the ghost is unchanged; it only tells us that the current delta of the component, with respect to the predicted baseline we have calculated, is 0. Thus we still need to inform the client to update the ghost value (using the predicted baseline).
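As a rough illustration of why an unchanged ghost is so cheap on the wire, here is a toy sketch (not the actual Netcode codec) of delta-compressing a changemask against a baseline, where a zero delta collapses to a single flag bit per 32-bit word:

```csharp
public static class ChangemaskDeltaSketch
{
    // Toy scheme: per 32-bit word, write one "changed?" flag bit; only when
    // the XOR delta vs. the baseline is non-zero, write the 32 delta bits too.
    public static int CompressedBitCount(uint[] baseline, uint[] current)
    {
        int bits = 0;
        for (int i = 0; i < current.Length; i++)
        {
            bits += 1;                      // flag bit
            if ((current[i] ^ baseline[i]) != 0)
                bits += 32;                 // raw delta word
        }
        return bits;
    }
}
```

Under this toy scheme, a 64-bit changemask identical to its baseline costs 2 bits, while one with a single changed word costs 34. The real codec is more sophisticated, but the shape is the same: unchanged masks shrink to almost nothing.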
We do indeed do some extra work when nothing has really changed (like a full serialization pass), but from a bandwidth perspective only the changed data is sent (plus some unfortunately necessary data for the ghost ids).
We should have done things slightly differently (and this is part of a feature we would like to implement at some point): always use some sort of zero-change optimisation to skip both serialization and sending for dynamic ghosts where possible.