If you’re comfortable with using or learning IJobChunk, shared components, and chunk components, you could try the following.
If the squad’s hive ID doesn’t change, you could put the ID in a separate ISharedComponentData, so that each chunk only contains entities belonging to one hive.
public struct SquadHiveID : ISharedComponentData
{
public int Value;
}
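As a sketch of the setup (assuming you spawn squad entities through an EntityManager; the names here are hypothetical and come from your own spawning logic), assigning the shared component might look like:

```csharp
// Hypothetical setup: hiveID and squadEntity come from your spawning code.
// Entities with different SquadHiveID values end up in different chunks.
entityManager.AddSharedComponent(squadEntity, new SquadHiveID { Value = hiveID });
```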
Then define a chunk component (one instance per chunk rather than per entity) for the squad archetype, say something like
public struct HiveChunkStats : IComponentData
{
public int Count;
}
The HiveChunkStats stores aggregate information, in this case just the sum of BeeSquad.Size for all entities in the chunk.
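Chunk components are added differently from regular components; one way (a sketch, assuming you have an EntityQuery matching the squad entities) is to add the chunk component to every chunk the query matches:

```csharp
// Hypothetical setup: squadQuery matches the squad archetype.
// AddChunkComponentData adds HiveChunkStats to each chunk matched by the query.
entityManager.AddChunkComponentData(squadQuery, new HiveChunkStats { Count = 0 });
```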
You would then write an IJobChunk that produces said aggregate information, something like this:
// Requires: using Unity.Burst; using Unity.Burst.Intrinsics;
//           using Unity.Collections; using Unity.Entities;
[BurstCompile]
internal struct UpdateHiveChunkStatsJob : IJobChunk
{
    [ReadOnly] public ComponentTypeHandle<BeeSquad> BeeSquadTypeHandle;
    public ComponentTypeHandle<HiveChunkStats> HiveChunkStatsTypeHandle;

    public void Execute(in ArchetypeChunk chunk, int unfilteredChunkIndex, bool useEnabledMask, in v128 chunkEnabledMask)
    {
        NativeArray<BeeSquad> beeSquads = chunk.GetNativeArray(ref BeeSquadTypeHandle);
        var en = new ChunkEntityEnumerator(useEnabledMask, chunkEnabledMask, chunk.Count);
        int count = 0;
        while (en.NextEntityIndex(out int i))
        {
            count += beeSquads[i].Size;
        }
        chunk.SetChunkComponentData(ref HiveChunkStatsTypeHandle, new HiveChunkStats { Count = count });
    }
}
Then you end up with per-chunk information describing a group of squads for one hive. From there, you can combine the per-chunk counts in a second, single-threaded IJobChunk (scheduled with Schedule or Run rather than ScheduleParallel).
[BurstCompile]
internal struct CombineHiveChunkStatsJob : IJobChunk
{
    [ReadOnly] public ComponentTypeHandle<HiveChunkStats> HiveChunkStatsTypeHandle;
    [ReadOnly] public SharedComponentTypeHandle<SquadHiveID> SquadHiveIDTypeHandle;
    public NativeParallelHashMap<int, int> PopulationByID;

    public void Execute(in ArchetypeChunk chunk, int unfilteredChunkIndex, bool useEnabledMask, in v128 chunkEnabledMask)
    {
        HiveChunkStats hiveChunkStats = chunk.GetChunkComponentData(ref HiveChunkStatsTypeHandle);
        int hiveID = chunk.GetSharedComponent(SquadHiveIDTypeHandle).Value;

        // The indexer's getter throws if the key is missing, so use
        // TryGetValue instead of += for the first chunk of each hive.
        if (PopulationByID.TryGetValue(hiveID, out int population))
            PopulationByID[hiveID] = population + hiveChunkStats.Count;
        else
            PopulationByID.Add(hiveID, hiveChunkStats.Count);
    }
}
The upshot of this is that you can parallelize part of the work (the parallel first job), but as far as I know, a NativeParallelHashMap's ParallelWriter only supports adding new key-value pairs, not updating existing values, so the combining step still has to run on a single thread (the single-threaded second job).
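To tie it together, here's a rough sketch of an ISystem that schedules both jobs. This is untested; the system name is mine, and the SystemAPI/EntityQueryBuilder calls assume Entities 1.x, so adjust for your version:

```csharp
// Hypothetical system wiring the two jobs together (Entities 1.x API assumed).
[BurstCompile]
public partial struct HivePopulationSystem : ISystem
{
    private EntityQuery _query;
    private NativeParallelHashMap<int, int> _populationByID;

    public void OnCreate(ref SystemState state)
    {
        _query = new EntityQueryBuilder(Allocator.Temp)
            .WithAll<BeeSquad, SquadHiveID>()
            .WithAllChunkComponentRW<HiveChunkStats>()
            .Build(ref state);
        _populationByID = new NativeParallelHashMap<int, int>(64, Allocator.Persistent);
    }

    public void OnDestroy(ref SystemState state)
    {
        _populationByID.Dispose();
    }

    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        // Complete last frame's jobs before clearing the map on the main thread.
        state.Dependency.Complete();
        _populationByID.Clear();

        // First job: per-chunk aggregation. Safe to run in parallel because
        // each chunk is processed by exactly one worker thread.
        var updateJob = new UpdateHiveChunkStatsJob
        {
            BeeSquadTypeHandle = SystemAPI.GetComponentTypeHandle<BeeSquad>(true),
            HiveChunkStatsTypeHandle = SystemAPI.GetComponentTypeHandle<HiveChunkStats>()
        };
        JobHandle handle = updateJob.ScheduleParallel(_query, state.Dependency);

        // Second job: combines per-chunk counts into the map. Scheduled with
        // Schedule rather than ScheduleParallel because it updates existing values.
        var combineJob = new CombineHiveChunkStatsJob
        {
            HiveChunkStatsTypeHandle = SystemAPI.GetComponentTypeHandle<HiveChunkStats>(true),
            SquadHiveIDTypeHandle = SystemAPI.GetSharedComponentTypeHandle<SquadHiveID>(),
            PopulationByID = _populationByID
        };
        state.Dependency = combineJob.Schedule(_query, handle);
    }
}
```

After the second job completes, PopulationByID maps each hive ID to the total BeeSquad.Size across all of that hive's squads.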