First a general comment…
Why are you using a coroutine for this specifically?
It seems to me that:
Complicates the code
Makes intent less clear & readable
Allocates GC memory
an Update() method with
if (!jobHandle.IsCompleted)
    return;
Will do just fine… keep it simple… (Rant over…)
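A minimal sketch of that polling pattern (the field names jobHandle and jobRunning are assumptions for illustration):

```csharp
using Unity.Jobs;
using UnityEngine;

public class JobPoller : MonoBehaviour
{
    JobHandle jobHandle;   // set when a job is scheduled
    bool jobRunning;       // true while a job is in flight

    void Update()
    {
        if (!jobRunning)
            return;
        if (!jobHandle.IsCompleted)
            return;

        jobHandle.Complete(); // sync point: makes the job's writes visible to the main thread
        jobRunning = false;
        // ... consume the results here ...
    }
}
```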
As for the most likely cause of the spike: when the job completes, you are scheduling a new iteration.
For this you are allocating two new NativeArrays with 1000000 elements each. This means we:
allocate 4 MB of memory
clear all of that memory to zeros by default
Allocating 4 MB of memory is relatively cheap (compared to your 7 ms spike, the size of the allocation generally doesn’t make it significantly more expensive).
However, clearing 4 MB of memory to zeros takes around 1-2 ms per array on my laptop (which has a per-core memory bandwidth of around 3 GB/second). Fortunately, there is an option on NativeArray where you can tell it to leave the memory uninitialized:
See NativeArrayOptions on the constructor. So essentially you allocate on the main thread and then fill the memory in the job, skipping the cost of step 2 on the main thread.
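For illustration, a sketch of such an allocation (the element count and allocator here are assumptions, not taken from the original code):

```csharp
using Unity.Collections;

// Skip the default zero-clear; the job must then write every element
// before anything reads it.
var data = new NativeArray<float>(
    1000000,
    Allocator.TempJob,
    NativeArrayOptions.UninitializedMemory);
```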
Naturally, if you are not careful and don’t guarantee to write to every element before reading it, your code will see randomly initialized memory.
It’s not clear what your code is trying to achieve, so perhaps keeping the native array around instead of allocating it on each iteration would be desirable?
Btw, did you know about
Profiler.BeginSample("Some block");
…
Profiler.EndSample();
Usually when I try to find out why something is slow, I put those around all the different places that might cause the performance hit. With coroutines you need to be careful not to cross yield boundaries with BeginSample/EndSample.
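For example, wrapping a suspect call might look like this (ScheduleNextIteration is a hypothetical stand-in for whatever code you are investigating):

```csharp
using UnityEngine.Profiling;

Profiler.BeginSample("Schedule iteration"); // label shown in the Profiler window
ScheduleNextIteration();                    // hypothetical method under investigation
Profiler.EndSample();                       // must pair with BeginSample in the same frame slice
```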
My best guess would be “lack of documentation”, which I expect to change once UT officially announces the release of its new tech and provides all the documentation one needs to make the best use of it.
Because I’m lazy.
This example is silly, but in the real thing I set up the jobs, schedule them, then wait on the job to complete, and after that I work on the result. For this sort of linear progression that yields a result, coroutines are just easier.
In an Update I would have to break it down into two if blocks: one for jobRunning == false (because a handle doesn’t have progress), the other for handle.IsCompleted.
Could we get a status on the handle? I’m thinking handle.progress, to mirror AsyncOperation. And even if calculating progress isn’t trivial (well… for JobFor it is), jumping from 0 to 1 is good enough.
Also, it would be great if you could rename handle.IsCompleted to handle.isDone so it matches AsyncOperation.isDone, because I notice that the Unity API has many different words for the same thing, and that is not ideal. (request.done, I am looking at you.)
I got rid of the allocation and now it is smooth. I have questions about dependencies, so I’ll open a new thread.
I am not sure I can agree. Done has a slightly different meaning to Completed. There is also the tense to worry about: wasCompleted vs isComplete, and wasDone vs isDone.
I would probably recommend using .done or .finished instead.
Every API has some broken English in it though, not just Unity
So it would make sense to use either a .done or a .finished property for everything, if everyone is looking for consistency across different parts of the API and doesn’t mind so much about chronological accuracy. I’m sure if we went down this rabbit hole, the work would never be isDone or isCompleted.
Experimenting with Jobs inside of coroutines myself. It’s a messy place to be, especially when you nest multiple coroutines. Thanks for sharing your example.
Working on a long-running IEnumerator Start that does procedural world generation, which reports back to a loading screen as it goes. I wanted to speed it up so users won’t have to wait so long for the world to generate, so I looked to Jobs for help.
I like the pattern of schedule, yield wait while, then completed.
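That pattern might look roughly like this (myJob is a placeholder for whatever job struct gets scheduled; not the poster’s actual code):

```csharp
using System.Collections;
using Unity.Jobs;
using UnityEngine;

IEnumerator GenerateWorld()
{
    JobHandle handle = myJob.Schedule();               // 1. schedule
    yield return new WaitWhile(() => !handle.IsCompleted); // 2. yield until the job reports done
    handle.Complete();                                 // 3. sync point before touching results
    // ... read the results, update the loading screen ...
}
```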
Regarding the Jobs system, FWIW, it seems easy to make mistakes marshaling data back and forth without getting clear feedback that you’re doing it wrong.
The silence regarding the mistakes I was making meant I couldn’t differentiate between misallocated data and the variations programmed into the procedural generation.