I’d like to suggest a possible feature request: a specialized version of IBufferElementData whose entities are stored in different chunks depending on whether the buffer has more than 0 elements. For example, a DynamicSharedBuffer with more than 0 elements could be queried and processed like so:
Entities
    .WithElements<Enemy>()
    .ForEach(
        // logic to process enemies
    ).ScheduleParallel();
And entities whose buffers have 0 elements like so:
Entities
    .WithEmpty<Enemy>()
    .ForEach(
        // logic to acquire enemies
    ).ScheduleParallel();
I know I can query normal DynamicBuffers and check their lengths with if statements or for loops, but when those predicates fail the system still executes, collects entities, and schedules jobs for nothing.
Also, is it possible to simulate this behavior with what ECS currently has to offer?
You can achieve this very same effect by removing the buffer component (the IBufferElementData) from every entity whose DynamicBuffer has a length of 0, since in both cases the entity would move between chunks anyway.
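A minimal sketch of that approach, written against the pre-1.0 SystemBase / Entities.ForEach API used elsewhere in this thread; the Enemy element type and the system name are illustrative, not from the original post:

using Unity.Entities;

// Illustrative buffer element; any IBufferElementData would work the same way.
public struct Enemy : IBufferElementData
{
    public Entity Value;
}

public partial class RemoveEmptyEnemyBufferSystem : SystemBase
{
    private EndSimulationEntityCommandBufferSystem ecbSystem;

    protected override void OnCreate()
    {
        ecbSystem = World.GetOrCreateSystem<EndSimulationEntityCommandBufferSystem>();
    }

    protected override void OnUpdate()
    {
        var ecb = ecbSystem.CreateCommandBuffer().AsParallelWriter();

        Entities
            .ForEach((Entity entity, int entityInQueryIndex, DynamicBuffer<Enemy> enemies) =>
            {
                // Removing the (empty) buffer moves the entity to another chunk,
                // so queries that require DynamicBuffer<Enemy> stop matching it.
                if (enemies.Length == 0)
                {
                    ecb.RemoveComponent<Enemy>(entityInQueryIndex, entity);
                }
            }).ScheduleParallel();

        ecbSystem.AddJobHandleForProducer(Dependency);
    }
}

Entities whose buffer was removed can later be matched with .WithNone<Enemy>() and have the buffer re-added through ecb.AddBuffer<Enemy>(...) when they should start collecting elements again.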
As a follow-up question: is using an XHasElementsComponent as a “gatekeeper” between systems that read and write an XDynamicBuffer performant? If we imagine our ECS systems as a data flow graph, each “gatekeeper” reduces the number of entities queried at each node. The nodes closer to the origin may process more entities, but the leaf nodes may process few or even no entities (in which case the system won’t even run!).
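For context, a downstream “leaf” system under this pattern could look roughly like the sketch below; XHasElementsComponent and XBufferElement are just the placeholder names from the question, and the per-element work is illustrative:

using Unity.Entities;

// Hypothetical gatekeeper tag and buffer element, used only for illustration.
public struct XHasElementsComponent : IComponentData { }
public struct XBufferElement : IBufferElementData { public float Value; }

public partial class ProcessXBufferSystem : SystemBase
{
    protected override void OnUpdate()
    {
        // Only chunks whose entities carry the gatekeeper tag are matched,
        // so upstream systems that strip the tag shrink this query.
        Entities
            .WithAll<XHasElementsComponent>()
            .ForEach((DynamicBuffer<XBufferElement> buffer) =>
            {
                for (int i = 0; i < buffer.Length; i++)
                {
                    // per-element work would go here
                }
            }).ScheduleParallel();
    }
}

Because the query requires the gatekeeper tag, chunks without it are never touched, and in the Entities versions where a system skips OnUpdate when none of its queries match, the whole system is skipped if no entity carries the tag.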
The best answer I can give you on that topic is to test both and profile which one works best for you; answering this question without knowing your entire context could easily be misleading.
I understand you want to help me avoid premature optimization. However, the optimization barely changes the codebase and is heavily supported by what I currently know about the scenario. At worst it’ll barely speed things up without mangling the codebase; at best it’ll drastically reduce the number of entities processed by systems further down the data flow graph, and there will be a lot of entities entering these expensive systems. What I don’t know is whether this pattern falls under good ECS practices or is a hack that shouldn’t be widely used. Here’s the kind of code I would add in each system that is part of the time-consuming data flow graph:
// Gatekeep the length of allies
// ecbConcurrent: a concurrent/parallel EntityCommandBuffer writer created earlier in OnUpdate
Entities
    .WithChangeFilter<AllyProximityWithinRangeComponent>()
    .ForEach(
        (
            Entity userEntity,
            int entityInQueryIndex,
            in DynamicBuffer<AllyProximityWithinRangeComponent> allies
        ) =>
        {
            if (allies.Length > 0)
            {
                ecbConcurrent.AddComponent(entityInQueryIndex, userEntity, new AllyProximityHasWithinRangeComponent());
            }
            else
            {
                ecbConcurrent.RemoveComponent<AllyProximityHasWithinRangeComponent>(entityInQueryIndex, userEntity);
            }
        }
    ).ScheduleParallel();
If this is considered bad practice then I can just delete these lines from the 10 or so systems that form the data flow graph.
Unity ECS is all about “Performance by Default”, so yes, my advice would be not to worry about optimizations when you don’t even know whether there is an issue yet.
About this code being OK: yes, this is totally fine. But I just remembered that they are going to introduce an “Enabled” state for every component at some point, so my best suggestion for you is to just check whether .Length > 0 and branch the execution for now (and in the future have that “gatekeeper” simply enable/disable the buffer instead of adding/removing it).
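For reference, here is a rough, main-thread sketch of what that future “gatekeeper” might look like, assuming the enableable-components API that later shipped in Entities 1.0 (IEnableableComponent, EntityManager.SetComponentEnabled, EntityQueryOptions.IgnoreComponentEnabledState); at the time of this exchange that API was not yet available, so treat every detail as illustrative rather than a definitive implementation:

using Unity.Collections;
using Unity.Entities;

// Illustrative enableable buffer element (name reused from the earlier snippet).
public struct AllyProximityWithinRangeComponent : IBufferElementData, IEnableableComponent
{
    public Entity Value;
}

public partial class AllyBufferGatekeeperSystem : SystemBase
{
    protected override void OnUpdate()
    {
        // Match entities regardless of whether the buffer is currently enabled.
        var query = SystemAPI.QueryBuilder()
            .WithAll<AllyProximityWithinRangeComponent>()
            .WithOptions(EntityQueryOptions.IgnoreComponentEnabledState)
            .Build();

        using var entities = query.ToEntityArray(Allocator.Temp);
        foreach (var entity in entities)
        {
            var buffer = EntityManager.GetBuffer<AllyProximityWithinRangeComponent>(entity);
            // Enable the buffer only while it actually has elements.
            EntityManager.SetComponentEnabled<AllyProximityWithinRangeComponent>(
                entity, buffer.Length > 0);
        }
    }
}

Toggling the enabled bit is not a structural change, so it avoids the chunk moves that AddComponent/RemoveComponent cause, while downstream queries still skip entities whose buffer is disabled by default.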