In my game I have a number of periodic jobs with extremely long runtimes (3-8 seconds with Burst when running on a single thread w/ highly optimized code). All of the jobs are configured as IJobParallelFor, which works wonderfully at cutting their execution down to under 200 ms on 24 cores…
However, I’m now getting a side effect of very noticeable frame drops: other job systems (physics/animation/HDRP/etc.) end up calling Complete() on the main thread for their jobs, resulting in my whale of a job being flushed to the main thread so theirs can be un-backlogged from the work queue and finished immediately.
I was wondering if there is any way to manually specify the max number of worker threads dedicated to a particular IJobParallelFor, or perhaps to another variant like IJobParallelForBatch? (I’m not interested in setting the global number of worker threads with JobsUtility.JobWorkerCount – I just want to know if it’s possible to reserve, say, 1 or 2 of the threads my monster is gobbling up for the poor physics engine.)
There are two ways I can theorize going about this:
A) Manually split my job into (JobsUtility.JobWorkerCount - 1) separate IJobs, then build a system to track each job individually and reassemble the output data afterwards – which is possible, but is also a lot of extra overhead for something I think should be fairly simple.
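For reference, option A would look roughly like this (a minimal sketch only – the SliceJob struct, the doubling "work", and the slice math are placeholders I made up, not my actual job):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Jobs.LowLevel.Unsafe;

[BurstCompile]
struct SliceJob : IJob
{
    [ReadOnly] public NativeArray<float> Input;
    // Each copy of the job writes only its own [Start, End) range of the shared output,
    // so the parallel-for safety restriction has to be disabled manually.
    [NativeDisableParallelForRestriction] public NativeArray<float> Output;
    public int Start, End;

    public void Execute()
    {
        for (int i = Start; i < End; i++)
            Output[i] = Input[i] * 2f; // stand-in for the real per-element work
    }
}

static class SlicedScheduler
{
    // Schedule (JobWorkerCount - 1) slice jobs, leaving one worker free for other systems.
    public static JobHandle ScheduleSliced(NativeArray<float> input, NativeArray<float> output)
    {
        int jobCount = JobsUtility.JobWorkerCount - 1;
        int per = (input.Length + jobCount - 1) / jobCount; // ceiling division
        var handles = new NativeArray<JobHandle>(jobCount, Allocator.Temp);
        for (int j = 0; j < jobCount; j++)
        {
            handles[j] = new SliceJob
            {
                Input = input,
                Output = output,
                Start = j * per,
                End = System.Math.Min((j + 1) * per, input.Length),
            }.Schedule();
        }
        JobHandle combined = JobHandle.CombineDependencies(handles);
        handles.Dispose();
        return combined;
    }
}
```

The combined JobHandle at least keeps completion tracking simple, but the per-slice bookkeeping and output reassembly is exactly the extra plumbing I’d rather not maintain.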
or, to do this more indirectly,
B) Specifying the minimum job batch size to be (iteration count) / (JobsUtility.JobWorkerCount - 1)… which doesn’t pique my interest too much, as that basically removes the job system’s ability to balance workloads by stealing work from other threads…
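Concretely, option B is just a matter of what gets passed as the innerloopBatchCount when scheduling (MyBigJob and data are placeholder names here, assuming a plain IJobParallelFor):

```csharp
using Unity.Jobs;
using Unity.Jobs.LowLevel.Unsafe;

int workers = JobsUtility.JobWorkerCount;
// Ceiling division: at most (workers - 1) oversized batches get created,
// so one worker thread is never occupied by this job.
int batchSize = (data.Length + workers - 2) / (workers - 1);
JobHandle handle = new MyBigJob { Data = data }.Schedule(data.Length, batchSize);
```

One batch per thread also means every batch runs for the full ~200 ms with no work-stealing to smooth out uneven per-element costs, which is the trade-off I describe below.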
I’m also not sure this method would prevent the backlog issue if another (medium-length) job already had its work scheduled/queued before the built-in systems (physics, etc.) called Complete() on their jobs… It seems like I would just be kicking the can down the road in terms of limiting traffic to prevent the bottleneck, while also potentially doubling my job’s total execution time if just one of the “giant” chunks happened to get queued after the first batch of (JobsUtility.JobWorkerCount - 2) threads.