Here, you're scheduling the job without any dependencies (you're ignoring the system's Dependency property) and storing the JobHandle yourself. You should schedule your job like this instead:
state.Dependency = new MyJob().Schedule(state.Dependency);
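Expanded into a minimal system, that pattern looks something like this (a sketch only - MyJob and the system name are placeholders):

```csharp
using Unity.Burst;
using Unity.Entities;

[BurstCompile]
public partial struct MySystem : ISystem
{
    [BurstCompile]
    public void OnUpdate(ref SystemState state)
    {
        // Pass the current Dependency in and store the returned handle back.
        // This way the job waits for earlier jobs touching the same components,
        // and later systems can in turn wait on this job.
        state.Dependency = new MyJob().Schedule(state.Dependency);
    }
}
```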
It isn't usually a great practice to create a separate job for each entity individually. I think the job scheduling system is designed around the idea of doing work in batches. For example, you can use IJobEntity to schedule a single job that iterates over all entities, without copying the entities into temporary arrays. If you want to process entities from an array manually like you're doing right now, you can schedule a single IJobParallelFor job instead.
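A minimal IJobEntity sketch, assuming a hypothetical MoveSpeed component - the job runs once over every entity matching its parameters:

```csharp
using Unity.Burst;
using Unity.Entities;
using Unity.Transforms;

// Hypothetical component for this example.
public struct MoveSpeed : IComponentData
{
    public float Value;
}

// One job iterates all entities that have LocalTransform and MoveSpeed;
// no temporary arrays, no per-entity job scheduling.
[BurstCompile]
public partial struct MoveJob : IJobEntity
{
    public float DeltaTime;

    void Execute(ref LocalTransform transform, in MoveSpeed speed)
    {
        transform.Position.y += speed.Value * DeltaTime;
    }
}

// In a system's OnUpdate:
// state.Dependency = new MoveJob { DeltaTime = SystemAPI.Time.DeltaTime }
//     .ScheduleParallel(state.Dependency);
```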
Also note that you don't need an array of EntityCommandBuffers - you can use a single EntityCommandBuffer and write all commands to it. In fact, maybe you don't even need to create any command buffers if you just use one of the pre-defined ones (e.g. EndSimulationEntityCommandBufferSystem runs at the end of the simulation group). That should be more efficient, because you can get rid of the JobHandle.CompleteAll stall and let your jobs run without waiting.
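Using the pre-defined command buffer looks roughly like this (a sketch; the system name is a placeholder):

```csharp
using Unity.Entities;

public partial struct SpawnSystem : ISystem
{
    public void OnUpdate(ref SystemState state)
    {
        // Fetch the singleton of the pre-defined end-of-simulation ECB system
        // and record commands into a buffer it owns.
        var ecbSingleton = SystemAPI
            .GetSingleton<EndSimulationEntityCommandBufferSystem.Singleton>();
        var ecb = ecbSingleton.CreateCommandBuffer(state.WorldUnmanaged);

        // ... schedule jobs that record into ecb
        // (use ecb.AsParallelWriter() for parallel jobs) ...

        // The commands play back automatically at the end of the simulation
        // group - no manual Playback(), Dispose(), or Complete() needed.
    }
}
```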
Thank you for your helpful reply. Could you explain a bit more about the system Dependency and IJobEntity?
I know about the system's Dependency, but I don't understand how it works.
I thought that in one frame all systems run and complete within that frame - even when I call MyJob.Schedule(), MyJob runs and completes in that same frame.
When I use IJobEntity, I see it only runs on one thread.
If I have 3 CubeTest components,
it runs like this:
1,2,3,4… 1,2,3,4… 1,2,3,4
It seems each job has to wait for the previous job to complete before the next one runs.
Unity puts your entities in chunks. Each chunk can fit up to 16 KB of memory, or up to 128 entities, whichever is lower.
IJobEntity internally uses IJobChunk, and IJobChunk's parallelism is over chunks, not entities. This means that if all of your entities fit in a single chunk, all entities will be processed by a single thread. This isn't a huge problem in practice - 99% of jobs are very fast, and thanks to efficient cache usage, processing a whole chunk of entities can be almost as fast as processing a single entity individually!
When your game gets larger and there are more/bigger entities, the workload should be spread across all cores automatically. If you really need to, you can force your entities to live in different chunks, e.g. using shared components.
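As a sketch of the shared-component trick: a chunk only holds entities whose shared component values all match, so assigning different values partitions entities into separate chunks (WorkGroup is a hypothetical component for this example):

```csharp
using Unity.Entities;

// Hypothetical shared component used purely to split entities across chunks.
public struct WorkGroup : ISharedComponentData
{
    public int Id;
}

// Entities with different Id values land in different chunks, so
// IJobChunk/IJobEntity can distribute them across worker threads:
// entityManager.AddSharedComponent(entityA, new WorkGroup { Id = 0 });
// entityManager.AddSharedComponent(entityB, new WorkGroup { Id = 1 });
```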
Also, I recommend using the profiler instead of logs for observing job behavior - it's much more convenient and you can do it at any time without modifying your code.
This is true, but for this to work correctly you need to assign your jobs to the Dependency, and include the Dependency when scheduling your jobs, so that all of the jobs scheduled during the frame form a long chain. The game calls Complete() on the dependency at the end of the frame to ensure that all of the jobs in the chain have finished running.
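A sketch of what that chain looks like inside a system (JobA and JobB are placeholders):

```csharp
public void OnUpdate(ref SystemState state)
{
    // Each Schedule call takes the current Dependency as input and returns
    // a new handle, so jobs scheduled later in the frame wait (when needed)
    // for the ones scheduled earlier.
    state.Dependency = new JobA().Schedule(state.Dependency);
    state.Dependency = new JobB().Schedule(state.Dependency); // after JobA

    // At the end of the frame, completing the final handle transitively
    // completes the whole chain.
}
```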
Thanks for your reply. I changed to IJobParallelFor and it works perfectly. I also switched to using
EndSimulationEntityCommandBufferSystem and removed JobHandle.CompleteAll().
But could you explain a bit about the innerloopBatchCount parameter of IJobParallelFor.Schedule?
The documentation says:
Batch size should generally be chosen depending on the amount of work performed in the job. A simple job, for example adding a couple of Vector3 to each other should probably have a batch size of 32 to 128. However if the work performed is very expensive then it is best to use a small batch size, for expensive work a batch size of 1 is totally fine.
Can I always set innerloopBatchCount = 1 so it always runs on all threads?
Or something like innerloopBatchCount = entities.Length / TotalThread (but I don't know how to get this value)?
As far as I know, the innerloopBatchCount parameter is something you should tweak once you have a good idea about how heavy the workload is, and how many indices (entities) it runs on.
A small innerloopBatchCount means each worker "steals" a small number of iterations to execute. This is OK for jobs that do a lot of work (e.g. lots of math), but stops being efficient when your job is lightweight.
This is because scheduling and executing jobs has an overhead. It doesn't matter much for most jobs, but when a job does barely any computation, as in the example of "adding a couple of Vector3", the job system might be wasting a lot of time compared to the amount of actual work it's doing. In that case, a bigger innerloopBatchCount lets the worker threads execute more job iterations before they need to talk to the job system to "steal" the next scheduled workload.
I usually just use a value of 8 for heavy jobs and 64 for lightweight jobs, put in a // todo, and worry about it later. I don't think you can predict the optimal setting at the beginning of production, before your game has a representative number of entities, and without measuring/profiling the changes. So don't worry about it too much! It doesn't make such a huge performance difference, and you can change it later without breaking anything.
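For concreteness, a sketch of how the batch count is passed to Schedule (the job itself is a placeholder):

```csharp
using Unity.Collections;
using Unity.Jobs;

struct ScaleJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float> Input;
    public NativeArray<float> Output;

    public void Execute(int index)
    {
        // Imagine heavier math here for the "expensive work" case.
        Output[index] = Input[index] * 2f;
    }
}

// Heavy per-iteration work: small batches keep all workers busy.
// var handle = job.Schedule(input.Length, 8, state.Dependency);
//
// Lightweight work: bigger batches amortize the stealing overhead.
// var handle = job.Schedule(input.Length, 64, state.Dependency);
```

If you do want the worker thread count for a Length / TotalThread style calculation, `Unity.Jobs.LowLevel.Unsafe.JobsUtility.JobWorkerCount` reports it - but as said above, measuring beats deriving here.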