There are some excellent points being made here; I really appreciate the guidance from the two of you. It certainly makes sense to centralize state transitions. I’m worried, though, that I’ve mischaracterized my original issue with a faulty metaphor.
Let’s say in this pizzeria, there is a phenomenon in which anything reaching a certain temperature inside the oven explodes, including calzones and anything else I may be using the pizza oven for.
In my project, I have a component called ExplodeTrigger which gets added to all explodable things. The component gets picked up by an entirely different system, which reacts to explosions in a number of ways, none of which concern the pizza oven. This is state that presumably need not concern our PizzaState.
I could have all explodable things carry an ExplodeTrigger from the start, which only gets enabled later on. But presumably that would create quite a few archetypes, especially as I make more and more things explodable.
This hardly sounds like a fringe use case, so maybe I have the wrong architecture entirely? Would it make more sense to invert the relationship and have the Explosion system always query all explodable things and check a flag to determine whether they need exploding? In my head this feels wrong, because the system would be doing a lot of work for (potentially) nothing.
I’m not quite sure what the new question is. Are you asking whether “ExplodeTrigger” should be:
A component that gets added to an Entity to indicate that it should explode this frame, or
An enableable component whose presence means that an Entity can explode, but which is disabled until it’s time for the explosion to actually happen?
If that’s the question, I’d say go with the second option. You avoid the problem of introducing many new archetypes, you avoid the performance cost of the structural changes from adding the ExplodeTrigger component at runtime, and all it costs you is 16 bytes of memory per chunk and a small performance hit in your “MakeStuffExplode” System, or whatever you call it.
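For concreteness, here’s a minimal sketch of that second option, assuming the Entities 1.0-style IEnableableComponent API; the type and system names are just placeholders, and the explosion reaction itself is elided:

```csharp
using Unity.Burst;
using Unity.Entities;

// Enableable tag: present on every explodable entity from the start,
// but disabled until the moment the entity should actually explode.
public struct ExplodeTrigger : IComponentData, IEnableableComponent { }

[BurstCompile]
public partial struct MakeStuffExplodeSystem : ISystem
{
    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        // The query only matches entities whose ExplodeTrigger is currently enabled;
        // chunks with no enabled bits set are rejected with a single mask check.
        foreach (var (trigger, entity) in
                 SystemAPI.Query<EnabledRefRW<ExplodeTrigger>>().WithEntityAccess())
        {
            // ...react to the explosion for 'entity' here (VFX, damage, etc.)...

            trigger.ValueRW = false; // consume the trigger so it only fires once
        }
    }
}
```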
What I’m saying is: Yes, do this. Make the flag you check be whether or not the component is enabled. In cases where there’s nothing about to explode in a chunk, the job will skip over the whole chunk in a single check. In the (I guess pretty rare) cases where everything in a chunk should explode, that’s maximally efficient as well. In other cases, as the guide says:
A system/query being (at worst) 2x slower because it uses enableable components compared to one that doesn’t is a drop in the ocean compared to the cost of the structural changes you’d likely need to make in order to avoid that cost. If it turns out your MakeStuffExplode system is actually a performance bottleneck, the code can always be refactored: perhaps instead of enabling components you just populate some NativeArray with the entities to be exploded in a given frame and have MakeStuffExplode iterate over that instead of running queries - although it’s conceivable that the cache misses you’d introduce with that approach might make things worse.
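If it ever comes to that, the “collect then iterate” idea might look roughly like the sketch below. I’m using a DynamicBuffer on a singleton entity rather than a raw NativeArray, and every name here is made up for illustration:

```csharp
using Unity.Entities;

// Hypothetical request buffer: whatever decides that something should explode
// appends an entry here instead of enabling a component on the target.
public struct ExplodeRequest : IBufferElementData
{
    public Entity Target;
}

public partial struct MakeStuffExplodeFromListSystem : ISystem
{
    public void OnCreate(ref SystemState state)
    {
        // Singleton entity that owns the per-frame list of explosion requests.
        var singleton = state.EntityManager.CreateEntity();
        state.EntityManager.AddBuffer<ExplodeRequest>(singleton);
    }

    public void OnUpdate(ref SystemState state)
    {
        var requests = SystemAPI.GetSingletonBuffer<ExplodeRequest>();

        for (int i = 0; i < requests.Length; i++)
        {
            // Random access into each target's chunk happens here, which is
            // where the cache misses mentioned above would come from.
            Entity target = requests[i].Target;
            // ...explode 'target'...
        }

        requests.Clear();
    }
}
```

The upside is that the work done per frame is proportional to the number of actual explosions; the price is jumping between chunks to touch each target.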
Honestly, life’s too short to fret over every system unless the profiler tells you that you need to. Software development is always about compromises, and it’s simply not possible to make every single system in a non-trivial ECS project operate at maximum efficiency. So you design for the common case, focus on the hot code paths, and do whatever’s going to move your project forward now, even if you have to revisit and optimize your solution later. DOD is about building something that’s easier to optimize than OOP, not about agonizing over every decision trying to get everything maximally optimal the first time, because that’s basically impossible.
Don’t try to force everything into one component. You need a way to query entities in the oven, and that’s different from a state machine. You could use an entity query or a physics query. Generally, when stuff is exploding, I use a physics query and then set states.
You could add an enableable component called InOven, but that kind of solution will not scale well; imagine you have ten or twenty ovens. I would keep the state machines separate from your spatial partitioning. You may need to build an optimized data structure for spatial partitioning. The oven is a place; baking is a state. You can be in the oven and not baking. For a simple oven game, you could use a NativeMultiHashMap or a couple of NativeLists.
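Here’s a very rough sketch of what I mean by keeping the spatial side in its own container, using what newer Collections versions call NativeParallelMultiHashMap (the same container as the NativeMultiHashMap above); all the names are made up:

```csharp
using Unity.Collections;
using Unity.Entities;

public partial struct OvenContentsSystem : ISystem
{
    // Oven entity -> entities currently inside that oven.
    private NativeParallelMultiHashMap<Entity, Entity> _contents;

    public void OnCreate(ref SystemState state)
    {
        _contents = new NativeParallelMultiHashMap<Entity, Entity>(256, Allocator.Persistent);
    }

    public void OnDestroy(ref SystemState state)
    {
        _contents.Dispose();
    }

    public void OnUpdate(ref SystemState state)
    {
        // Rebuild the "who is in which oven" map each frame from a physics/overlap
        // query or simple transform checks, e.g. _contents.Add(ovenEntity, pizzaEntity).
        _contents.Clear();

        // A separate baking/explosion state machine can then ask for one oven's
        // contents without every explodable entity needing an InOven component:
        // foreach (var occupant in _contents.GetValuesForKey(ovenEntity)) { ... }
    }
}
```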
For additional context, I’m at the final stage of my game, where I’m getting DOTS warnings telling me I’m using too much archetype memory (which is baffling, as the Archetypes editor window shows I’m using significantly less than the high water mark). I think I have a general idea of where to invest next; thanks a lot for taking the time to clarify what was a big question mark in my head!
Warning: This is a pretty speculative response. Please take it with a pinch of salt, and correct me if you know I’m wrong.
I’m not super familiar with how archetype memory is handled, but certainly in older versions of Entities, every time a new archetype was encountered at runtime it had to register itself in a data structure of all known EntityQueries, so that each query could quickly identify which archetypes (and therefore which chunks) it matched. As a result, every archetype occupied some memory for the rest of the run, even if it was only ever used once.
I don’t know whether or not that’s still the case with Entities 1.0, but if it is, perhaps those messages are warning about the high water mark rather than the number of archetypes that are actually present at any given moment?
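If you want to test that theory, a speculative diagnostic like the one below might help; assuming EntityManager.GetAllArchetypes and EntityArchetype.ChunkCount behave the way I think they do, it logs how many archetypes are currently sitting around with zero chunks:

```csharp
using Unity.Collections;
using Unity.Entities;
using UnityEngine;

// Speculative diagnostic: counts archetypes that currently contain no chunks.
// If this number keeps climbing over a session, stale archetypes (and whatever
// bookkeeping they drag along) are a plausible source of the memory warnings.
public partial struct ArchetypeAuditSystem : ISystem
{
    public void OnUpdate(ref SystemState state)
    {
        var archetypes = new NativeList<EntityArchetype>(Allocator.Temp);
        state.EntityManager.GetAllArchetypes(archetypes);

        int empty = 0;
        for (int i = 0; i < archetypes.Length; i++)
        {
            if (archetypes[i].ChunkCount == 0)
                empty++;
        }

        Debug.Log($"{archetypes.Length} archetypes total, {empty} with zero chunks right now");
        archetypes.Dispose();
    }
}
```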
Oh god, that sounds highly plausible. There’s a thread on it over here with no resolution. And yes, I’m on Entities 0.51.
I’ve been slowly removing all runtime prefab conversion (which also happens to be a prerequisite for upgrading to 1.0), in the hope that registering prefab archetypes at build time will alleviate my memory problems. But if stale/unused archetypes indeed never get cleaned up, I’ll only be prolonging the issue, not solving it.
Now I’m really curious whether also upgrading to 1.0 will actually solve things.