In a past conversation on Pooling in DOTS, a question was raised about whether it could be useful to manually “reserve” X amount of chunks for a given archetype, even if there are currently no entities in the world fitting that archetype. The idea is that it could be beneficial in cases where we would rapidly spawn huge numbers of entities that might despawn just as rapidly (imagine bullet hells, vfx, etc…), and it would basically act as a “pooling” solution to avoid constantly re-allocating chunks that we know we’re gonna need in advance.
So, 2 questions:
Is it possible?
Is it a good idea and/or would it even be worth it?
I think I would come at this from a different angle. For the stated problem domain, pooling isn’t usually a primary design concern. It might be significant, but it’s still secondary.
Data access patterns are, I think, still the main concern for scaling something like that. So partitioning is common, and you might very well use caching strategies there, but the partitioning is actually the key part.
Take our projectile system, for example: I use a single NativeArray for projectiles in flight, with a pre-allocated stack to assign indexes into that array for new projectiles. Pop the stack, process the projectile, push the index back on the stack when it’s done (short version). Yes, I’m reusing the array, but that’s not really the key part of the design. Now, we only designed for around 1k in flight max. If I had to do this at a larger scale I would partition somehow, and caching would more or less just be baked in.
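To make the index-stack idea concrete, here’s a minimal sketch in plain C#. All names are illustrative, not from any actual project, and a real DOTS version would use NativeArray and a native container instead of managed collections:

```csharp
using System.Collections.Generic;

struct Projectile
{
    public float X, Y, Z;
    public bool Active;
}

class ProjectilePool
{
    readonly Projectile[] projectiles;   // fixed backing array, allocated once
    readonly Stack<int> freeIndexes;     // pre-allocated stack of free slots

    public ProjectilePool(int capacity)
    {
        projectiles = new Projectile[capacity];
        freeIndexes = new Stack<int>(capacity);
        // Pre-fill the stack so every slot starts out free.
        for (int i = capacity - 1; i >= 0; i--)
            freeIndexes.Push(i);
    }

    // Pop a free slot for a newly fired projectile.
    public int Spawn(float x, float y, float z)
    {
        int index = freeIndexes.Pop();
        projectiles[index] = new Projectile { X = x, Y = y, Z = z, Active = true };
        return index;
    }

    // Push the slot back when the projectile is done.
    public void Despawn(int index)
    {
        projectiles[index].Active = false;
        freeIndexes.Push(index);
    }
}
```

The point is that the array is reused as a side effect of the design; the free-index stack is what gives O(1) spawn/despawn without any allocation at runtime.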
Chunks themselves are a form of partitioning for a specific purpose. When optimizing specific features, I think it’s likely each has its own non-overlapping concerns. I would use chunks where their natural usage fits the problem. My starting point would be: what is an appropriate design? Then I would look at how that integrates with ECS and find the natural point where they intersect.
Internally, chunks are already pooled, and because we always use the same chunk size, we can share them across different archetypes. The only type of pooling that would accelerate anything is one where most of the data is already set up and Instantiate wouldn’t have to be called at all.
In practice, if you use batched operations to instantiate entities, performance is quite good, roughly ~1000x better than with game objects, so it’s doubtful whether pooling is even necessary. I suggest you profile your specific case at scale…
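By batched operations I mean the NativeArray overload of Instantiate, which creates all the entities in one call rather than one at a time. A hedged sketch (API names from the Entities 0.x era; exact signatures vary by package version, and `projectilePrefab` is an assumed prefab entity converted elsewhere):

```csharp
// Instantiate 10,000 copies of a prefab entity in a single batched call.
var entities = new NativeArray<Entity>(10000, Allocator.Temp);
entityManager.Instantiate(projectilePrefab, entities);
// ... record the handles if you need them, then release the temp array.
entities.Dispose();
```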
You could set up a pool of entities with all the right ComponentData but add the Disabled component to all of them. Then when you need them you could activate them by removing the Disabled component.
Removing/Adding a component on a query (Possibly with a SharedComponentFilter) is extremely fast.
Removing/Adding a component on a specific set of entities would also be very fast in a job with a concurrent EntityCommandBuffer.
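The Disabled-pool idea above could look something like this. This is a sketch under the assumption of an Entities 0.x-era API (names like `EntityPool` are hypothetical; check the signatures against your package version). Entities carrying Disabled are skipped by normal queries, so "activation" is just a component removal:

```csharp
using Unity.Collections;
using Unity.Entities;

public static class EntityPool
{
    // Pre-spawn the pool: batch-instantiate everything, then disable it all.
    public static NativeArray<Entity> Create(EntityManager em, Entity prefab, int size)
    {
        var pooled = new NativeArray<Entity>(size, Allocator.Persistent);
        em.Instantiate(prefab, pooled);
        for (int i = 0; i < size; i++)
            em.AddComponent<Disabled>(pooled[i]);
        return pooled;
    }

    // Activate one pooled entity: strip Disabled so queries see it again.
    public static void Activate(EntityManager em, Entity e)
    {
        em.RemoveComponent<Disabled>(e);
    }

    // Return it to the pool by re-adding Disabled.
    public static void Deactivate(EntityManager em, Entity e)
    {
        em.AddComponent<Disabled>(e);
    }
}
```

For bulk activation, the same Add/Remove calls exist as EntityQuery overloads on EntityManager, which is where the query-wide speed mentioned above comes from.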