Adding values of entities based on ID

Hello,

I recently started learning ECS and I'm trying to simulate beehives. One Beehive entity spawns around 200 BeeSquad entities.
What's the best way to determine the population of bees in a beehive?
So far I have the following code. It works, but it's single-threaded.
Is there a way to make it multithreaded? And is it a good idea to connect BeeSquads to Beehives via an int ID?

public struct Beehive : IComponentData
{
    public int Id;
    // ...
}
public struct BeeSquad : IComponentData
{
    public int HiveId;
    public int Size;
    public int AgeInTicks;
}

The PopulationByID HashMap already has all the keys.

public partial struct CalculatePopulation : IJobEntity
{
    public NativeParallelHashMap<int, int> PopulationByID;
    public void Execute(in BeeSquad beeSquad)
    {
        PopulationByID[beeSquad.HiveId] += beeSquad.Size;
    }
}

It’s totally fine to have an ID on the squad to relate it back to the hive. You could also do the inverse, the hive keeping track of its squads as it spawns and as they are destroyed (perhaps via cleanup components), but your idea works just fine. It all depends on the requirements of the rest of your simulation really.
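For reference, the cleanup-component route could look something like this (a sketch, assuming Entities 1.x; the names are illustrative):

```csharp
// Sketch only: ICleanupComponentData survives entity destruction until it is
// explicitly removed, which gives the hive a chance to notice dead squads.
public struct SquadCleanup : ICleanupComponentData
{
    public Entity Hive; // the hive to decrement when this squad is destroyed
}
```

A bookkeeping system would then query for entities that still have SquadCleanup but no longer have BeeSquad, decrement the corresponding hive's counter, and remove the cleanup component.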


Yes there is, but it is complicated and makes the CPU do more total work during the frame, so it is only worth doing if the single-threaded job is taking too much time in the profiler. If you have a profile capture showing that, I can walk you through how to parallelize it. Otherwise, it is best not to worry about it; there are probably bigger performance problems in your project more worth your time.


If you’re comfortable with using or learning IJobChunk, shared components, and chunk components, you could try the following.

If the squad’s hive ID doesn’t change, you could declare a separate ISharedComponentData component for the ID, so that each chunk only contains entities belonging to one hive.

public struct SquadHiveID : ISharedComponentData
{
    public int Value;
}

Then define a chunk component, which stores one value per chunk of squad entities, say something like

public struct HiveChunkStats : IComponentData
{
   public int Count;
}

The HiveChunkStats stores aggregate information, in this case just the sum of BeeSquad.Size for all entities in the chunk.
You would then write an IJobChunk that produces said aggregate information, something like this:

[BurstCompile]
internal struct UpdateHiveChunkStatsJob : IJobChunk
{
    [ReadOnly] public ComponentTypeHandle<BeeSquad> BeeSquadTypeHandle;
    public ComponentTypeHandle<HiveChunkStats> HiveChunkStatsTypeHandle;

    public void Execute(in ArchetypeChunk chunk, int unfilteredChunkIndex, bool useEnabledMask, in v128 chunkEnabledMask)
    {
        NativeArray<BeeSquad> beeSquads = chunk.GetNativeArray(ref BeeSquadTypeHandle);
        var en = new ChunkEntityEnumerator(useEnabledMask, chunkEnabledMask, chunk.Count);
        int count = 0;
        while (en.NextEntityIndex(out int i))
        {
            count += beeSquads[i].Size;
        }
        chunk.SetChunkComponentData(ref HiveChunkStatsTypeHandle, new HiveChunkStats { Count = count });
    }
}

Then you end up with per-chunk totals, each describing a group of squads belonging to one hive. From there, you can combine them in a single-threaded IJobChunk.

[BurstCompile]
internal struct CombineHiveChunkStatsJob : IJobChunk
{
    [ReadOnly] public ComponentTypeHandle<HiveChunkStats> HiveChunkStatsTypeHandle;
    [ReadOnly] public SharedComponentTypeHandle<SquadHiveID> SquadHiveIDTypeHandle;
    public NativeParallelHashMap<int, int> PopulationByID;

    public void Execute(in ArchetypeChunk chunk, int unfilteredChunkIndex, bool useEnabledMask, in v128 chunkEnabledMask)
    {
        HiveChunkStats hiveChunkStats = chunk.GetChunkComponentData(ref HiveChunkStatsTypeHandle);
        int hiveID = chunk.GetSharedComponent(SquadHiveIDTypeHandle).Value;
        PopulationByID[hiveID] += hiveChunkStats.Count;
    }
}

The upshot is that you can parallelize part of the work (the first job), but as far as I know, NativeParallelHashMap still needs single-threaded updating if you want to modify existing values rather than purely add new key-value pairs (hence the single-threaded second job).
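To tie it together, scheduling the two jobs with a dependency between them could look roughly like this (a sketch, assuming Entities 1.x; the map lifetime and per-hive key reset are simplified and assumed to happen elsewhere):

```csharp
[BurstCompile]
public partial struct HivePopulationSystem : ISystem
{
    private EntityQuery _squadQuery;
    // Assumed: created with Allocator.Persistent and pre-filled with a zeroed
    // entry per hive ID before these jobs run; disposal omitted from the sketch.
    private NativeParallelHashMap<int, int> _populationByID;

    public void OnCreate(ref SystemState state)
    {
        // Squad entities grouped by SquadHiveID, with the per-chunk stats attached.
        _squadQuery = new EntityQueryBuilder(Allocator.Temp)
            .WithAll<BeeSquad, SquadHiveID>()
            .WithAllChunkComponent<HiveChunkStats>()
            .Build(ref state);
    }

    public void OnUpdate(ref SystemState state)
    {
        // Parallel pass: one Size total per chunk.
        JobHandle perChunk = new UpdateHiveChunkStatsJob
        {
            BeeSquadTypeHandle = SystemAPI.GetComponentTypeHandle<BeeSquad>(isReadOnly: true),
            HiveChunkStatsTypeHandle = SystemAPI.GetComponentTypeHandle<HiveChunkStats>()
        }.ScheduleParallel(_squadQuery, state.Dependency);

        // Single-threaded pass: fold the chunk totals into the hash map.
        state.Dependency = new CombineHiveChunkStatsJob
        {
            HiveChunkStatsTypeHandle = SystemAPI.GetComponentTypeHandle<HiveChunkStats>(isReadOnly: true),
            SquadHiveIDTypeHandle = state.GetSharedComponentTypeHandle<SquadHiveID>(),
            PopulationByID = _populationByID
        }.Schedule(_squadQuery, perChunk);
    }
}
```

The important part is that the second job takes the first job's handle as its dependency, so the parallel per-chunk pass always finishes before the single-threaded combine runs.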


Thanks for reassuring me; I was worried it was some egregious error.

The simulation is very simple for now, so this job was actually taking most of the frame time, but that's partially because I instantiate a bunch of new BeeSquads, and that's the next thing I'll try to parallelize.
I'm not fixated on optimization; I'm just trying to learn all this stuff.

Thank you a lot. I read up on these components. Even though you had already written the jobs, it was still challenging for me to implement, but I somehow managed.
It gave a slight FPS improvement in the scenario I was testing.
Shared components seem extremely useful. I was previously unsure how to use them, but now I can easily filter by HiveId instead of iterating over every BeeSquad and selecting the ones whose HiveId matches.
This solution also gives me more flexibility and lets me spawn more BeeSquads with smaller sizes. If needed, I'll fall back to the old code for smaller simulations.
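In case it helps anyone finding this later, filtering by the shared component can look roughly like this (a sketch; `hiveId` and the query setup are illustrative):

```csharp
// Only chunks whose SquadHiveID matches the filter are visited at all,
// so no per-entity HiveId comparison is needed.
EntityQuery query = entityManager.CreateEntityQuery(typeof(BeeSquad), typeof(SquadHiveID));
query.SetSharedComponentFilter(new SquadHiveID { Value = hiveId });
int squadCount = query.CalculateEntityCount(); // squads belonging to that one hive
```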

Thanks for all the replies.