Curious if there is some way to fix this pattern

You are only allowed to schedule a job if all of its native container fields are populated before the job starts; however, allocating on the main thread is slower than using Allocator.Temp.

For this reason, Temp allocations must be made inside the job. That, however, means you have to pass them as explicit arguments to every single function that needs to access them. When you have a lot of data structures being passed around, this gets tedious and hard to read. It would be much nicer to have them as member variables of the job itself.
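For illustration, the explicit-argument version ends up looking something like this (the helper names here are hypothetical):

    // Every scratch container has to be threaded through as a parameter,
    // at every call site, for every helper that touches it.
    private static void ProcessFace(int face, NativeList<foo> openSet, NativeList<bool> litFaces, NativeList<foobar> horizon)
    {
        GrowHorizon(face, litFaces, horizon);
        // ...
    }

    private static void GrowHorizon(int face, NativeList<bool> litFaces, NativeList<foobar> horizon)
    {
        // ...
    }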

The quick solution is to wrap the real job in a parent job and allocate that memory in the parent job’s Execute:

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using Unity.Mathematics;

    [BurstCompile]
    public struct ComputeVerticesJobWrapper : IJob
    {
        [ReadOnly]
        private readonly NativeArray<float3> inputPoints;

        private readonly NativeHashMapList<int, foo> faces;

        private readonly float epsilon;

        public ComputeVerticesJobWrapper(NativeArray<float3> inputVertices, NativeHashMapList<int, foo> faces, float epsilonMultiplier = 1f)
        {
            inputPoints = inputVertices;
            this.faces = faces;
            epsilon = 0.0001f * epsilonMultiplier;
        }

        public void Execute()
        {
            int numVertices = inputPoints.Length;

            // Temp allocations are legal here because we are already on the
            // job thread; they are freed automatically when the job ends.
            NativeList<bool> litFaces = new NativeList<bool>(3 * numVertices / 2, Allocator.Temp);
            NativeList<foobar> horizon = new NativeList<foobar>(numVertices, Allocator.Temp);
            NativeList<foo> openSet = new NativeList<foo>(numVertices, Allocator.Temp);

            // Run the real job inline on this same thread, handing it the
            // freshly allocated scratch containers via its constructor.
            ComputeVerticesJob computeVerticesJob = new ComputeVerticesJob(inputPoints, faces, openSet, litFaces, horizon, epsilon);
            computeVerticesJob.Execute();
        }
    }
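For completeness, scheduling the wrapper from the main thread then looks like any other IJob (assuming `points` and `faces` were allocated with a main-thread allocator such as TempJob or Persistent):

    ComputeVerticesJobWrapper job = new ComputeVerticesJobWrapper(points, faces);
    JobHandle handle = job.Schedule();
    handle.Complete();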

I wonder if it would be possible to initialise these at the start of job execution rather than before the job. For example:

    [AllocatedAtJobBegin]
    private NativeList<foobar> horizon;

Allocations shouldn’t be the bottleneck, though. Why do you think it’s slow?

Because I profiled it, and the allocations are a bottleneck. We have to start many thousands of jobs based on user input.

Temp is a faster allocator than TempJob, as it is per-thread.

I recall the saying “the fastest code is the code you don’t execute”: can’t you make a persistent allocation instead, so you don’t pay for Temp/TempJob allocations at all?

About the title question: the only thing you can do is switch to UnsafeList<T> (via NativeList<T>.GetUnsafeList()), which is not checked by the jobs debugger for resource violations. It still does bounds checks etc., but it is obviously less safe than NativeList<T> (which can be acceptable for the sake of performance).
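A minimal sketch of what that can look like, assuming the scratch list is only ever touched by this one job (the job name and its contents are illustrative):

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    [BurstCompile]
    public struct UnsafeScratchJob : IJob
    {
        // Not a tracked container, so the jobs debugger does not require
        // it to be populated at schedule time.
        private UnsafeList<int> horizon;

        public void Execute()
        {
            // Allocated on the job thread; Temp memory is released
            // automatically when the job finishes.
            horizon = new UnsafeList<int>(64, Allocator.Temp);
            Fill();
        }

        // Member methods can use the field directly, with no parameter passing.
        private void Fill()
        {
            for (int i = 0; i < 10; i++)
                horizon.Add(i);
        }
    }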

Persistent allocations are extremely slow, so we can’t use those. Unsafe could work, I suppose, though those are by definition unsafe.

I think the point was to allocate memory beforehand and reuse that, rather than allocating memory all the time in the jobs themselves.
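A sketch of that reuse pattern, with illustrative names: pay the Persistent allocation cost once, then Clear() between runs.

    // Allocated once up front, e.g. in OnCreate/Awake.
    NativeList<foobar> horizon = new NativeList<foobar>(1024, Allocator.Persistent);

    // Before each run: reuse the buffer instead of reallocating.
    horizon.Clear();
    // ... schedule the job with `horizon` as a field ...

    // On teardown:
    horizon.Dispose();

With many thousands of jobs in flight at once you would need one set of buffers per concurrent job, so this trades memory for allocation time.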

I was struggling with this before, but I found that if you do this in the job struct, you can allocate the memory inside the Execute method and it works fine:

        [NativeDisableContainerSafetyRestriction]
        private NativeArray<CellData> PotentialMoveCells;
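Put together, that looks roughly like this (a sketch; the job name and size are illustrative, CellData is the type from the snippet above):

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Collections.LowLevel.Unsafe;
    using Unity.Jobs;

    [BurstCompile]
    public struct MoveJob : IJob
    {
        // Opting this one field out of the container safety checks lets
        // it stay unassigned at schedule time.
        [NativeDisableContainerSafetyRestriction]
        private NativeArray<CellData> PotentialMoveCells;

        public void Execute()
        {
            // Allocated on the job thread; freed automatically at job end.
            PotentialMoveCells = new NativeArray<CellData>(64, Allocator.Temp);
            // ... member methods can use PotentialMoveCells directly ...
        }
    }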

Wonder what exactly the safety is that you lose for this particular use case.

My guess is none, if you’re using that array exclusively in that job and thread. The safety checks, as far as I’m aware, are for race conditions, and there’s no problem with that in this case.


Race conditions and aliasing, from what I know too. So technically that should be a good solution.

Hmm, I thought this causes you to lose Burst’s ability to assume things don’t alias, making the generated code slower.