How to run jobs in parallel that produce separate outputs?

Hello! I am developing a game that generates terrain procedurally in chunks, and I am writing a job that generates the chunk data. I would like to run the job in parallel for multiple chunks at a time, but the output data is persistent and is referenced later, after the job is complete. IJobParallelFor jobs seem to require that the output be one big array, whereas I need the outputs to be separate, and copying the output from the big array into the individual chunk data would probably cause a lot of garbage collection. Is there another way to run jobs in parallel where the output data can be separate?

Why would that cause GC allocs?

If multiple jobs are totally independent, they can run in parallel with each other.


I’m saying that I would need to take the big output array from the IJobParallelFor, allocate separate data arrays for each chunk, copy the data over, and then dispose the big output array.

Because each chunk has its own data, and chunks are created and destroyed on the fly, their data needs to be separate.

So if I just use a regular IJob (not IJobParallelFor) and schedule a bunch of them at the same time, would that also utilize all cores / job worker threads?

NativeArrays don’t cause GC allocations, except in the Editor, where a small managed allocation is made as a debugging/leak-detection feature. So yes, copying from one giant buffer into a bunch of smaller buffers is perhaps a memcpy you don’t need, but it isn’t causing GC.
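
For reference, that copy would just be a straight memcpy between native buffers; a hypothetical sketch, where `bigOutput`, `chunkIndex`, `ChunkSize` and `chunkHeights` are made-up names:

```csharp
// NativeArray<T>.Copy is a raw memcpy between native buffers, so it never
// touches the managed heap: no GC pressure, just an extra copy.
NativeArray<float> chunkHeights = new NativeArray<float>(ChunkSize, Allocator.Persistent);
NativeArray<float>.Copy(bigOutput, chunkIndex * ChunkSize, chunkHeights, 0, ChunkSize);
```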

Potentially. Though there are other ways to screw it up. Try it out and see if it does what you want. And if not, come back with what you have and we’ll try to help you fix whatever issue arises.
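
To make that concrete, here is a minimal sketch of that approach, with hypothetical names (`Chunk`, `ChunkGenJob`, `ChunkSize`, `GenerateChunks`): one plain IJob per chunk, each writing into that chunk’s own Persistent NativeArray, so nothing needs to be copied out of a shared buffer afterwards.

```csharp
using System.Collections.Generic;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Hypothetical chunk container; heights outlives the job and is disposed when the chunk unloads.
class Chunk
{
    public int x, z;
    public NativeArray<float> heights;
}

// One job instance per chunk. Each instance only touches its own NativeArray,
// so the instances have no dependencies on each other.
struct ChunkGenJob : IJob
{
    public int chunkX, chunkZ;
    public NativeArray<float> heights;

    public void Execute()
    {
        // Placeholder terrain math; swap in your own noise / heightmap code.
        for (int i = 0; i < heights.Length; i++)
            heights[i] = Mathf.Sin(chunkX + i * 0.1f) + Mathf.Cos(chunkZ + i * 0.1f);
    }
}

class ChunkScheduler : MonoBehaviour
{
    const int ChunkSize = 16 * 16;

    void GenerateChunks(List<Chunk> chunks)
    {
        var handles = new NativeArray<JobHandle>(chunks.Count, Allocator.Temp);
        for (int i = 0; i < chunks.Count; i++)
        {
            var chunk = chunks[i];
            chunk.heights = new NativeArray<float>(ChunkSize, Allocator.Persistent); // per-chunk output
            handles[i] = new ChunkGenJob { chunkX = chunk.x, chunkZ = chunk.z, heights = chunk.heights }.Schedule();
        }
        // Complete them all here, or keep JobHandle.CombineDependencies(handles)
        // around and complete it later in the frame.
        JobHandle.CompleteAll(handles);
        handles.Dispose();
        // Each chunk.heights now holds its generated data and stays valid until you Dispose() it.
    }
}
```

Because no two jobs write to the same container, the safety system won’t flag any conflicts between them, and the job system is free to spread them across the worker threads.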
