Can I write to the same variable in each Execute(index)?

Is this a valid job:

    public struct TestJob : IJobFor
    {
        [ReadOnly]
        public NativeArray<int> Values;
        
        public NativeReference<int> Sum;
        
        public void Execute(int index)
        {
            Sum.Value += Values[index];
        }
    }

The IJobFor docs say that:

Each iteration must be independent from other iterations and the safety system enforces this rule for you. The indices have no guaranteed order and are executed on multiple cores in parallel.

I don’t know if the above job is valid, but I have an idea for how it might go wrong:

  • Two CPU cores try to write to Sum.Value at the same time, and somehow the value gets corrupted. (not sure exactly how, though)

If the above job is valid, what would make it invalid? Would reading Sum.Value and using it for something else in each Execute(index) be invalid?

link: Unity - Scripting API: IJobFor

The job has a race condition on the Sum variable when running on multiple threads: `Sum.Value += Values[index]` is a read-modify-write, so two threads can read the same old value at once and one of the additions gets lost. It’s basically the situation described in the example here: Race condition - Wikipedia
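To make the lost-update failure concrete, here is a minimal stand-alone sketch in plain C# (no Unity involved; `LostUpdateDemo` and the thread counts are made up for illustration):

    using System.Threading;

    class LostUpdateDemo
    {
        static int sum;

        static void Main()
        {
            var threads = new Thread[4];
            for (int t = 0; t < threads.Length; t++)
            {
                threads[t] = new Thread(() =>
                {
                    for (int i = 0; i < 1_000_000; i++)
                        sum += 1; // read-modify-write: not atomic
                });
                threads[t].Start();
            }
            foreach (var thread in threads) thread.Join();

            // Expected 4,000,000, but typically prints less: two threads
            // can read the same old value and one increment is lost.
            System.Console.WriteLine(sum);
        }
    }

The same mechanism applies inside a parallel job, just with worker threads instead of manually created ones.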

For that job to be valid you would have to have some protection around the Sum variable, or work with it atomically.

The following piece of code is untested, but should work.

    public struct TestJob : IJobFor
    {
        [ReadOnly]
        public NativeArray<int> Values;
        
        // Sum is written from multiple threads, so the safety system has to
        // be told to allow that; all writes below go through Interlocked.
        [NativeDisableParallelForRestriction]
        public NativeReference<int> Sum;
        
        public void Execute(int index)
        {
            unsafe 
            {
                // GetUnsafePtr() (Unity.Collections.LowLevel.Unsafe) returns void*
                int* underlying = (int*)Sum.GetUnsafePtr();
                System.Threading.Interlocked.Add(ref UnsafeUtility.AsRef<int>(underlying), Values[index]);
            }
        }
    }
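For completeness, scheduling could look something like this (untested sketch; the variable names are made up, and the batch size of 64 is arbitrary):

    // values : NativeArray<int>, sum : NativeReference<int>, already allocated
    var job = new TestJob { Values = values, Sum = sum };
    JobHandle handle = job.ScheduleParallel(values.Length, 64, default);
    handle.Complete();
    int total = sum.Value;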

However, consider whether you really want to do this in parallel. It might actually be faster to run it as a job on a single core, because then there is no synchronization overhead between cores. Alternatively, you can have multiple jobs work on subsets of the Values array (summing up, say, 256 values each in a local variable) and add to the ‘Sum’ variable atomically afterwards.
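The subset idea could be sketched like this (untested; `ChunkedSumJob` and `ChunkSize` are made-up names, and each Execute(index) is treated as one chunk rather than one element):

    using System.Threading;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    public struct ChunkedSumJob : IJobFor
    {
        public const int ChunkSize = 256;

        [ReadOnly]
        public NativeArray<int> Values;

        // Written from multiple threads; all writes go through Interlocked.
        [NativeDisableParallelForRestriction]
        public NativeReference<int> Sum;

        public void Execute(int chunkIndex)
        {
            int start = chunkIndex * ChunkSize;
            int end = System.Math.Min(start + ChunkSize, Values.Length);

            // Accumulate locally: no contention inside the loop.
            int localSum = 0;
            for (int i = start; i < end; i++)
                localSum += Values[i];

            // One atomic add per chunk instead of one per element.
            unsafe
            {
                Interlocked.Add(ref UnsafeUtility.AsRef<int>(Sum.GetUnsafePtr()), localSum);
            }
        }
    }

You would schedule it with arrayLength = (Values.Length + ChunkSize - 1) / ChunkSize, so each index covers one chunk.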


I’d say it’s 99.99% certain that iterating over the array and accumulating the sum of values on the main thread will be faster.

What is the overhead of Schedule() (or the other methods like Run or ScheduleParallel)?

The Unity docs advise that having a few small jobs that each complete quickly is generally better than one large job, because Execute() doesn’t yield; once started, it runs to completion.

I haven’t found any mention of the potential overhead of scheduling jobs in the docs. The docs do mention that memcpy is used to copy the managed memory to native memory:

The job system uses memcpy to copy blittable types and transfer the data between the managed and native parts of Unity. It uses memcpy to put data into native memory when scheduling jobs and gives the managed side access to that copy when executing jobs.

Which makes me think that at least setting up the memory space for the job shouldn’t have a lot of overhead.

But if scheduling jobs does have a lot of overhead, what is causing it?

In all likelihood the overhead is either constant or depends on the size of your job’s data (the memcpy part). And most likely it will also depend on the system it’s running on.

It should be mainly housekeeping that adds to the overhead. You will hardly be able to measure it with just a few jobs, but people have spawned hundreds to thousands of jobs in an instant and had their whole app choke.

There is just no silver bullet, no golden rule. Just keep profiling your jobs if they don’t perform as well as you think they should.