I’m working on a procedural open-world game and I need a way to keep track of what areas have been generated and are already in memory. The pattern I have in mind for this is:
Native hashmap contains loaded chunk coordinates as keys
Entities with an observeChunk component check the hashmap for all chunks they should be able to see. If a chunk isn't there, its coordinates are added to a dynamic buffer
Various world generation systems iterate over dynamic buffer and add whatever entities and components they need to populate the new chunk
Newly generated chunks are added to the hashmap and the buffer is cleared
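A minimal sketch of that loop, assuming the pre-1.0 Entities API discussed in this thread (`ChunkRequest`, `ChunkSize`, and `ViewRadius` are names I'm making up here, not engine types):

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

// Hypothetical buffer element: a coordinate waiting to be generated.
public struct ChunkRequest : IBufferElementData { public int2 Coord; }

public class ChunkTrackingSystem : ComponentSystem
{
    const int ChunkSize = 16;  // assumed chunk edge length in world units
    const int ViewRadius = 2;  // assumed view distance in chunks

    // The byte value is unused; the hashmap doubles as a set of loaded coords.
    NativeHashMap<int2, byte> loadedChunks;

    protected override void OnCreate() =>
        loadedChunks = new NativeHashMap<int2, byte>(1024, Allocator.Persistent);

    protected override void OnDestroy() => loadedChunks.Dispose();

    protected override void OnUpdate()
    {
        // Step 2: each observer queues coordinates it should see but can't find.
        Entities.ForEach((DynamicBuffer<ChunkRequest> buffer, ref Translation pos) =>
        {
            var center = new int2((int)math.floor(pos.Value.x / ChunkSize),
                                  (int)math.floor(pos.Value.z / ChunkSize));
            for (int dx = -ViewRadius; dx <= ViewRadius; dx++)
                for (int dz = -ViewRadius; dz <= ViewRadius; dz++)
                {
                    var coord = center + new int2(dx, dz);
                    if (!loadedChunks.ContainsKey(coord))
                        buffer.Add(new ChunkRequest { Coord = coord });
                }
        });
        // Steps 3-4 run in the generation systems; a cleanup pass afterwards
        // calls loadedChunks.TryAdd for each generated coord and clears the buffer.
    }
}
```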
This all feels very ECS, except for the hashmap that just kind of exists in the background here. Where do I put it? It could go in an ISharedComponentData, except it needs to be updated regularly and there's a lot of overhead there (every update copies the entire hashmap to a new chunk instead of mutating it in place). It could live in a system, but then it's hard to interact with from outside that system (I'm thinking about serialization here).
Am I missing something or is this just a bad pattern to begin with?
I’m playing with the same ideas, and currently I ask myself why chunks?
We use them primarily for generating a subset of the world, and for iterating over a smaller subset of entities during play.
Why not free ourselves from them, and use a secondary world for generation and for offloading objects based on size and distance? Systems working on the secondary world can take longer without interfering with the active one, which is the subset we actively work on around the player.
Even in a secondary world, I still need to know which areas the player has visited so the game doesn’t generate the same content twice. I can’t completely unload things far from the player and regenerate them later because it isn’t a static world and things will need to continue to be simulated even if the player isn’t nearby.
So to get back to your original question about native container inside a system and serialization concerns:
Keeping a native hashmap inside a system is not that bad. Initialization and disposal become easy because they can conform to the system's lifecycle. Getting access from another system is also straightforward with World.GetExistingSystem(). This is also fairly friendly to unit testing and refactoring, should you decide to move things around.
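As a sketch of that access pattern (`ChunkMapSystem` and its property are made-up names): the owning system ties the container to its own OnCreate/OnDestroy, and other systems grab a handle to it in their OnCreate.

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;

public class ChunkMapSystem : ComponentSystem
{
    NativeHashMap<int2, byte> loadedChunks;

    // NativeHashMap is a small struct wrapping a pointer, so handing it
    // out by value still refers to the same underlying storage.
    public NativeHashMap<int2, byte> LoadedChunks => loadedChunks;

    protected override void OnCreate() =>
        loadedChunks = new NativeHashMap<int2, byte>(1024, Allocator.Persistent);

    protected override void OnDestroy() => loadedChunks.Dispose(); // owner handles disposal

    protected override void OnUpdate() { }
}

public class WorldGenSystem : ComponentSystem
{
    NativeHashMap<int2, byte> loadedChunks;

    protected override void OnCreate() =>
        loadedChunks = World.GetExistingSystem<ChunkMapSystem>().LoadedChunks;

    protected override void OnUpdate()
    {
        // read/write loadedChunks here
    }
}
```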
If you need to serialize it in any way, be editor-friendly, or keep an intermediate data form, then I'd recommend keeping the native container in a ScriptableObject. You just need to do a bit of safekeeping in the lifecycle methods. How you get access to the ScriptableObject from systems depends on your own setup. For me, it's easily done through dependency inversion (I do my own custom world initialization so I can handle constructor injection when adding systems to the world).
With a ScriptableObject, would you have the service work on that object directly (managed memory), or copy it into the service when you load the game and copy it back out for serialization before saving? The player isn't the only entity capable of generating new chunks, and there could be dozens or hundreds of entities checking the hashmap per frame (read-only), so I'd like to keep this as job-friendly as possible.
For example, I have a TypeIndexer ScriptableObject that serializes a list of Net Types that need to be synced over the network. It also lazily builds the NativeHashMap equivalent so that job systems can use it:
```csharp
// Assumes: using System; using System.Collections.Generic;
// using Unity.Collections; using Unity.Entities; using UnityEngine;
[CreateAssetMenu(menuName = "Misc/NetTypeIndexer")]
public class NetTypeIndexer : ScriptableObject, INetTypeIndexer
{
    [SerializeField] List<Type> netTypes = new List<Type>(); // I'm serializing this using Odin in the actual code
    NativeHashMap<ComponentType, byte> nativeHashmap;
    bool hashmapInited = false;

    public NativeHashMap<ComponentType, byte> GetNativeHashmap()
    {
        if (!hashmapInited)
        {
            // Lazily build the native lookup from the serialized managed list.
            nativeHashmap = new NativeHashMap<ComponentType, byte>(netTypes.Count, Allocator.Persistent);
            for (int i = 0; i < netTypes.Count; i++)
                nativeHashmap.TryAdd(new ComponentType(netTypes[i]), (byte)i);
            hashmapInited = true;
        }
        return nativeHashmap;
    }

    void OnDisable()
    {
        // Persistent allocations aren't garbage collected; this is the
        // safekeeping in the lifecycle methods mentioned above.
        if (hashmapInited) { nativeHashmap.Dispose(); hashmapInited = false; }
    }
    ...
}
```
Example JobComponentSystem that uses the HashMap:
```csharp
public class ItemPickUpSystem : JobComponentSystem
{
    INetTypeIndexer typeIndexer;
    NativeHashMap<ComponentType, byte> typeHashMap;

    public ItemPickUpSystem(INetTypeIndexer typeIndexer) // <- injected through Dependency Manager
    {
        this.typeIndexer = typeIndexer;
    }

    protected override void OnCreate()
    {
        typeHashMap = typeIndexer.GetNativeHashmap();
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        return new MyJob
        {
            ComponentTypeToNetTypeIndex = typeHashMap,
        }.ScheduleSingle(this, inputDeps);
    }
    ...
}
```
Hmm. Does that copy the hashmap into the job every frame, or does the job work on the hashmap directly? I'm not sure how the native containers work under the hood. Edit: It looks like NativeHashMap is just a wrapper around a pointer, so it shouldn't actually copy the data.
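Right: only the small container struct (pointer plus safety handle) is copied into the job, so the job reads and writes the same underlying allocation. A quick sketch to illustrate (`AddEntryJob` is a made-up name):

```csharp
using Unity.Collections;
using Unity.Jobs;

struct AddEntryJob : IJob
{
    // Copied by value into the job, but it's only a pointer wrapper.
    public NativeHashMap<int, byte> Map;
    public void Execute() => Map.TryAdd(42, 1);
}

// Somewhere on the main thread:
var map = new NativeHashMap<int, byte>(4, Allocator.TempJob);
new AddEntryJob { Map = map }.Schedule().Complete();
bool visible = map.ContainsKey(42); // true: the job wrote into the same allocation
map.Dispose();
```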
Maybe I can do something with entities: have each chunk be an entity, have the system build the hashmap from those entities when it's created, and manually keep them in sync as new ones are added. Then I can use the hashmap for lookups of a specific chunk while still being able to iterate over the chunk entities for serialization and whatever else.
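A sketch of that hybrid (`ChunkCoord` and `ChunkIndexSystem` are hypothetical names): the chunk entities are the serializable source of truth, and the hashmap is just a rebuildable lookup index over them.

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;

// Hypothetical component marking an entity as a chunk.
public struct ChunkCoord : IComponentData { public int2 Value; }

public class ChunkIndexSystem : ComponentSystem
{
    NativeHashMap<int2, Entity> chunkLookup;

    protected override void OnCreate()
    {
        chunkLookup = new NativeHashMap<int2, Entity>(1024, Allocator.Persistent);
        // Rebuild the index from whatever chunk entities already exist
        // (e.g. after deserializing a saved world).
        Entities.ForEach((Entity e, ref ChunkCoord coord) =>
            chunkLookup.TryAdd(coord.Value, e));
    }

    protected override void OnDestroy() => chunkLookup.Dispose();

    // Generation systems call this when they spawn a new chunk entity,
    // keeping the index manually in sync.
    public void Register(int2 coord, Entity chunkEntity) =>
        chunkLookup.TryAdd(coord, chunkEntity);

    public bool IsLoaded(int2 coord) => chunkLookup.ContainsKey(coord);

    protected override void OnUpdate() { }
}
```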