[ECS design problem] Concurrently processing independent consistent strict subsets of IComponentData

In normal use cases you’d use IJobParallelFor to process entities concurrently on multiple cores. However, IJobParallelFor does not allow explicit control over how entities are split into batches (the batch size is only a hint). Is there a way to process the same consistent, strict subsets of entities on multiple threads every frame, i.e. to control the batching explicitly? I cannot currently do this with IJob because multiple jobs cannot access the same IComponentData simultaneously:

// Note: this example does not work, but it shows what I want to do with ECS.
// "Data" stands in for the component data both jobs need to touch.

struct LOD1 : IJob
{
    public Data data;

    public void Execute()
    {
        // Process this same subset every frame
        for (int i = 0; i < data.stuff.Length; i++)
        {
            data.stuff[i] = DoSomething(data.stuff[i]);
        }
    }
}

struct LOD2 : IJob
{
    public Data data;

    public void Execute()
    {
        // Process this same subset every frame
        for (int i = 0; i < data.stuff.Length; i++)
        {
            data.stuff[i] = DoSomethingElse(data.stuff[i]);
        }
    }
}

// --------------------------------- OnUpdate ---------------------------------
// Schedule multiple different independent subsets of "data" using LOD1 and
// LOD2 concurrently.

^This code is not possible with ECS. How would I achieve an equivalent behavior?

I cannot currently do this with IJob because multiple jobs cannot access the same IComponentData simultaneously:

They can if the access is read-only; mark the field as [ReadOnly].

If you’re writing, you’ll have to double buffer or something similar.
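
For illustration, a minimal sketch of two jobs reading the same container concurrently (the job and field names here are mine, not from the thread). With the shared field marked [ReadOnly], the safety system lets both jobs be scheduled at once with no dependency between them:

using Unity.Collections;
using Unity.Jobs;

struct SumJob : IJob
{
    [ReadOnly] public NativeArray<float> Shared; // read by both jobs
    public NativeArray<float> Result;            // length 1, owned by this job only

    public void Execute()
    {
        float total = 0f;
        for (int i = 0; i < Shared.Length; i++)
            total += Shared[i];
        Result[0] = total;
    }
}

struct MaxJob : IJob
{
    [ReadOnly] public NativeArray<float> Shared; // read by both jobs
    public NativeArray<float> Result;            // length 1, owned by this job only

    public void Execute()
    {
        float max = float.MinValue;
        for (int i = 0; i < Shared.Length; i++)
            if (Shared[i] > max) max = Shared[i];
        Result[0] = max;
    }
}

// Both handles can be in flight simultaneously because neither job writes Shared:
// var h1 = new SumJob { Shared = shared, Result = sum }.Schedule();
// var h2 = new MaxJob { Shared = shared, Result = max }.Schedule();
// JobHandle.CombineDependencies(h1, h2).Complete();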

Sounds cool. How do you double buffer?
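
A minimal sketch of one common double-buffering pattern (assuming this is what's meant; the job and buffer names are mine): jobs read the front buffer and write the back buffer, then the two are swapped after completion, so no job ever writes a container that another job is reading in the same frame.

using Unity.Collections;
using Unity.Jobs;

struct StepJob : IJob
{
    [ReadOnly] public NativeArray<float> Front; // last frame's state
    public NativeArray<float> Back;             // this frame's state

    public void Execute()
    {
        for (int i = 0; i < Front.Length; i++)
            Back[i] = Front[i] * 0.5f; // placeholder update rule
    }
}

// Per frame: read Front, write Back, then swap the references so the next
// frame reads what was just written.
// new StepJob { Front = front, Back = back }.Schedule().Complete();
// var tmp = front; front = back; back = tmp;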

Reading isn’t the only way. You can also manually disable the safety restriction: put the [NativeDisableContainerSafetyRestriction] attribute on the job’s container field declaration.
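
Applied to the original LOD question, a sketch of what that could look like (my construction, not from the thread). Once the safety checks are off, it is entirely on you to guarantee the subsets never overlap:

using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs;

struct SubsetJob : IJob
{
    // Safety checks are skipped for this field, so two SubsetJobs may hold
    // write access to the same array at once; nothing verifies the ranges
    // are actually disjoint.
    [NativeDisableContainerSafetyRestriction]
    public NativeArray<float> Stuff;

    public int Start;
    public int End; // exclusive

    public void Execute()
    {
        for (int i = Start; i < End; i++)
            Stuff[i] += 1f; // placeholder per-subset work
    }
}

// Schedule both halves concurrently; the ranges must not overlap:
// var h1 = new SubsetJob { Stuff = stuff, Start = 0,  End = 25 }.Schedule();
// var h2 = new SubsetJob { Stuff = stuff, Start = 25, End = 50 }.Schedule();
// JobHandle.CombineDependencies(h1, h2).Complete();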


You can also use IJobParallelForBatch in some cases:

private struct TestJob : IJobParallelForBatch
{
    public NativeArray<int> Data;

    // Called once per batch; this call owns the contiguous range
    // [startIndex, startIndex + count).
    public void Execute(int startIndex, int count)
    {
        var end = startIndex + count;
        for (var i = startIndex; i < end; i++)
        {
            Data[i] *= 2; // some per-element work within this batch
        }
    }
}

protected override void OnUpdate()
{
    NativeArray<int> data = new NativeArray<int>(50, Allocator.TempJob);
    for (int i = 0; i < data.Length; i++)
    {
        data[i] = i;
    }

    TestJob job = new TestJob()
    {
        Data = data
    };

    // 50 elements split into batches of at least 10 indices each.
    job.ScheduleBatch(50, 10).Complete();

    // TempJob allocations must be disposed once the job has completed.
    data.Dispose();
}

Thanks for helping me out. Can you explain IJobParallelForBatch? I couldn’t find much documentation.
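
As far as I understand it: where IJobParallelFor calls Execute(index) once per element, IJobParallelForBatch calls Execute(startIndex, count) once per contiguous batch, and the second argument to ScheduleBatch is the minimum number of indices per batch, so actual batches may be that size or larger. A hypothetical probe job (names are mine) makes the split visible:

using Unity.Collections;
using Unity.Jobs;

struct BatchProbe : IJobParallelForBatch
{
    public NativeArray<int> BatchCounts; // same length as the data being split

    public void Execute(int startIndex, int count)
    {
        // startIndex is unique per batch and lies inside this call's own
        // range, so no two batches ever write the same slot.
        BatchCounts[startIndex] = count;
    }
}

// var counts = new NativeArray<int>(50, Allocator.TempJob);
// new BatchProbe { BatchCounts = counts }.ScheduleBatch(50, 10).Complete();
// Nonzero entries in counts now mark where each batch started and how big it was.
// counts.Dispose();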

“Entities with the same SharedComponentData are grouped together in the same chunks. The index to the SharedComponentData is stored once per chunk, not per Entity. As a result SharedComponentData have zero memory overhead on a per Entity basis.” - Unity ECS Documentation
https://github.com/Unity-Technologies/EntityComponentSystemSamples/blob/master/Documentation/content/ecs_in_detail.md
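
To make the quoted guarantee concrete, a hypothetical shared component (the type and values are mine): every entity given the same LodGroup value lands in chunks that hold only that value.

using Unity.Entities;

// Hypothetical LOD tag. The chunk layout packs entities that share the same
// Level value into the same chunks, with the value stored once per chunk.
struct LodGroup : ISharedComponentData
{
    public int Level;
}

// e.g. EntityManager.AddSharedComponentData(entity, new LodGroup { Level = 1 });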

I was wondering whether you can rely on entities with the same ISharedComponentData being batched together (when using IJobParallelForBatch). Is this rule strictly enforced?