Hi,
I'm having trouble getting job dependencies to work with a variable number of jobs.
This is the general setup:
I have a source array src, e.g. 10000 elements → NativeArray(10000), read-only.
I have a counts array counts of 256 * numJobs elements → NativeArray(256 * numJobs), write.
I'd like to schedule numJobs jobs, working in parallel on sub-parts of these arrays:
Job 1 - src sub-part 0 - 999, counts sub-part 0 - 255
Job 2 - src sub-part 1000 - 1999, counts sub-part 256 - 511
…
The jobs only touch their own sub-parts → no race conditions.
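To make the intended layout concrete, here is a minimal sketch of the index arithmetic only (plain C#, no Unity types; the concrete sizes are illustrative):

```csharp
using System;

class SliceLayout
{
    static void Main()
    {
        // Illustrative sizes: src.Length and elements per job.
        int count = 10000;
        int sliceSize = 1000;
        int numJobs = (count + sliceSize - 1) / sliceSize; // ceiling division

        for (int t = 0; t < numJobs; t++)
        {
            int srcStart = t * sliceSize;
            int srcLen   = Math.Min(sliceSize, count - srcStart); // last slice may be shorter
            int cntStart = t * 256;                               // each job owns 256 counters
            // Job t reads src[srcStart .. srcStart+srcLen-1]
            // and writes counts[cntStart .. cntStart+255] — ranges never overlap.
            Console.WriteLine($"Job {t}: src {srcStart}-{srcStart + srcLen - 1}, counts {cntStart}-{cntStart + 255}");
        }
    }
}
```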
It is a generic IJob struct, currently defined as:
public partial struct RadixCount<T> : IJob where T : struct, IRadixSortableInt
{
    [NativeDisableParallelForRestriction]
    public NativeArray<int> myCounts;

    [NativeDisableParallelForRestriction]
    [ReadOnly] public NativeArray<T> mySrc;

    [ReadOnly] public int keyOffset;

    public void Execute()
    {
        for (int i = 0; i < mySrc.Length; i++)
        {
            // Flip the sign bit so negative ints order correctly, then bucket on the byte at keyOffset.
            myCounts[(byte)(((math.asuint(mySrc[i].GetKey()) ^ 0x80000000) >> keyOffset) & 0x000000FF)] += 1;
        }
    }
}
I start these jobs with:
public static class RadixMT
{
    public static void RankSortInt<T>(NativeArray<int> ranks, NativeArray<T> src) where T : struct, IRadixSortableInt
    {
        const int sliceSize = 10000;
        int count = src.Length;
        int numThreads = (count / sliceSize) + 1;
        NativeArray<int> counts = new NativeArray<int>(256 * numThreads, Allocator.TempJob, NativeArrayOptions.ClearMemory);
        NativeArray<int> prefixSum = new NativeArray<int>(256 * numThreads, Allocator.TempJob, NativeArrayOptions.UninitializedMemory);
        NativeArray<Indexer> frontArray = new NativeArray<Indexer>(count, Allocator.TempJob, NativeArrayOptions.UninitializedMemory);
        NativeArray<Indexer> backArray = new NativeArray<Indexer>(count, Allocator.TempJob, NativeArrayOptions.UninitializedMemory);
        NativeArray<JobHandle> handles = new NativeArray<JobHandle>(numThreads, Allocator.TempJob);

        for (int t = 0; t < numThreads; t++)
        {
            handles[t] = new RadixCount<T>
            {
                mySrc = src.GetSubArray(t * sliceSize, math.min(sliceSize, src.Length - (t * sliceSize))),
                keyOffset = 0,
                myCounts = counts.GetSubArray(t * 256, 256)
            }.Schedule();
        }

        JobHandle.CombineDependencies(handles).Complete();
    }
}
I also tried:
JobHandle radixCounts = new JobHandle();
for (int t = 0; t < numThreads; t++)
{
    JobHandle j = new RadixCount<T>
    {
        mySrc = src.GetSubArray(t * sliceSize, math.min(sliceSize, src.Length - (t * sliceSize))),
        keyOffset = 0,
        myCounts = counts.GetSubArray(t * 256, 256)
    }.Schedule();
    radixCounts = JobHandle.CombineDependencies(radixCounts, j);
}
radixCounts.Complete();
But I get error messages stating:
InvalidOperationException: The previously scheduled job RadixCount`1 reads from the Unity.Collections.NativeList`1[EndPoint] RadixCount`1.mySrc. You must call JobHandle.Complete() on the job RadixCount`1, before you can write to the Unity.Collections.NativeList`1[EndPoint] safely
When I don't run them in parallel:
for (int t = 0; t < numThreads; t++)
{
    JobHandle j = new RadixCount<T>
    {
        mySrc = src.GetSubArray(t * sliceSize, math.min(sliceSize, src.Length - (t * sliceSize))),
        keyOffset = 0,
        myCounts = counts.GetSubArray(t * 256, 256)
    }.Schedule();
    j.Complete();
}
it works fine.
What would be the correct way to handle the dependencies?