In normal use cases, you'd use IJobParallelFor to process entities concurrently on multiple cores. However, IJobParallelFor does not allow explicit control over how entities are split up for processing; the innerloopBatchCount argument is only a hint to the scheduler. Is there a way to process consistent, strict subsets of entities on multiple threads, i.e. to control the batching explicitly? I can't currently do this with IJob because multiple jobs are not allowed to access the same IComponentData simultaneously:
// note: this example does not compile, but it shows what I want to do with ECS
struct LOD1 : IJob
{
    Data data;

    public void Execute()
    {
        // process this same subset every frame
        for (int i = 0; i < data.stuff.Length; i++)
        {
            data.stuff[i] = DoSomething();
        }
    }
}

struct LOD2 : IJob
{
    Data data;

    public void Execute()
    {
        // process this same subset every frame
        for (int i = 0; i < data.stuff.Length; i++)
        {
            data.stuff[i] = DoSomethingElse();
        }
    }
}

// --------------------------------------- OnUpdate ---------------------------------------
// Schedule multiple different independent subsets of "data" using LOD1 and LOD2 concurrently
^ This code is not possible with ECS. How would I achieve equivalent behavior?
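For illustration, the closest I can get today is to sidestep ECS components entirely and keep each subset in its own container. This is only a minimal sketch, not ECS code: Data, DoSomething, and DoSomethingElse from the pseudocode above are stand-ins here, replaced by a plain NativeArray<float> and placeholder math. Because the two jobs touch disjoint containers, the safety system lets them run concurrently:

using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public class DisjointSubsetsBehaviour : MonoBehaviour
{
    struct Lod1Job : IJob
    {
        public NativeArray<float> Subset; // this job's private subset of the data

        public void Execute()
        {
            // process this same subset every frame
            for (int i = 0; i < Subset.Length; i++)
                Subset[i] *= 2f; // stand-in for DoSomething()
        }
    }

    struct Lod2Job : IJob
    {
        public NativeArray<float> Subset;

        public void Execute()
        {
            for (int i = 0; i < Subset.Length; i++)
                Subset[i] += 1f; // stand-in for DoSomethingElse()
        }
    }

    NativeArray<float> lod1Data;
    NativeArray<float> lod2Data;

    void OnEnable()
    {
        lod1Data = new NativeArray<float>(32, Allocator.Persistent);
        lod2Data = new NativeArray<float>(32, Allocator.Persistent);
    }

    void Update()
    {
        // Disjoint containers mean no aliasing, so both jobs may run at the
        // same time on different worker threads.
        var h1 = new Lod1Job { Subset = lod1Data }.Schedule();
        var h2 = new Lod2Job { Subset = lod2Data }.Schedule();
        JobHandle.CombineDependencies(h1, h2).Complete();
    }

    void OnDisable()
    {
        lod1Data.Dispose();
        lod2Data.Dispose();
    }
}

But this loses everything ECS gives me, which is why I'm asking how to express the same split within ECS.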
You can also use IJobParallelForBatch in some cases.
// (requires using Unity.Collections; and using Unity.Jobs;)
private struct TestJob : IJobParallelForBatch
{
    public NativeArray<int> Data;

    public void Execute(int startIndex, int count)
    {
        // Each Execute call covers one contiguous batch of indices.
        var end = startIndex + count;
        for (var i = startIndex; i < end; i++)
        {
            int val = Data[i]; // only touch indices belonging to this batch
        }
    }
}

protected override void OnUpdate()
{
    NativeArray<int> data = new NativeArray<int>(50, Allocator.TempJob);
    for (int i = 0; i < data.Length; i++)
    {
        data[i] = i;
    }

    TestJob job = new TestJob
    {
        Data = data
    };

    job.ScheduleBatch(50, 10).Complete();
    data.Dispose(); // TempJob allocations must be disposed, or Unity reports a leak
}
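If I understand the batching correctly, a length of 50 with a batch size of 10 produces five batches, so Execute is invoked with (startIndex, count) pairs of (0, 10), (10, 10), (20, 10), (30, 10), and (40, 10), each batch potentially running on a different worker thread. Unlike the innerloopBatchCount hint on IJobParallelFor, these batch boundaries are exact, which gives you strict control over which indices are processed together.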
I was wondering whether you can rely on entities that share the same ISharedComponentData value being batched together (when using IJobParallelForBatch). Is that rule strictly enforced?