Do all jobs scheduled in one frame need to complete in that exact same frame? If not, then what happens to native containers passed to them with Allocator.TempJob?
If my memory serves me correctly: yes, if you create a native container with the TempJob allocator, you can schedule a job and let it run in the background without completing it, but after 4 frames Unity will actually dispose of the data for you. You may have noticed errors saying a native collection hasn't been disposed, resulting in a memory leak; Unity doesn't just leave the leak there on its own, the memory is actually disposed of for you.
This means if your job is still running after 4 frames, it'll end up pointing to a deallocated container.
Your best bet is to use persistent allocation alongside the DeallocateOnJobCompletionAttribute in your job.
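A minimal sketch of that combination (the job body, array size, and names here are illustrative assumptions, not from the thread):

using Unity.Collections;
using Unity.Jobs;

struct LongRunningJob : IJob
{
    // Persistent memory survives past the 4-frame TempJob lifetime;
    // the attribute frees it automatically once the job has run.
    [DeallocateOnJobCompletion] public NativeArray<int> results;

    public void Execute()
    {
        for (int i = 0; i < results.Length; i++)
            results[i] = i; // placeholder work
    }
}

// At the scheduling site:
var results = new NativeArray<int>(1024, Allocator.Persistent);
new LongRunningJob { results = results }.Schedule();
// No Complete() is required this frame; the array is freed when the job finishes.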
Wow, this explains so much… can you be a bit more specific here on how to fix this? I have a lot of long-running jobs and always wondered why nothing really works as expected.
Use Allocator.Persistent if your jobs need to run for more than 4 frames.
And do not use JobHandle.CombineDependencies; then it will be fine.
What, why?
As a side note, DeallocateOnJobCompletionAttribute seems like very poor design. It's the caller's responsibility to decide whether the buffer is still in use or can be freed. You can get the same effect by sticking a Dispose at the end of a JobHandle dependency chain. E.g., instead of…
struct MyJob : IJob
{
    [DeallocateOnJobCompletion] public NativeArray<int> array;
    // ...
}

new MyJob { array = array }.Schedule();
The equivalent, “cleaner” code is:
struct MyJob : IJob
{
    public NativeArray<int> array;
    // ...
}

JobHandle handle = new MyJob { array = array }.Schedule();
array.Dispose(handle); // array will be disposed as soon as the handle is completed
OK, how is DeallocateOnJobCompletion going to work?
My job fills some native arrays over a long period of time… when the job completes, the arrays are correctly filled and I can check them out. I had already set the native arrays to Persistent, but now the arrays are getting disposed on job completion…
So what is the correct way to schedule long-running jobs, then get the data out of them, then deallocate the whole thing?
I am doing something similar. Just check whether the filling process is done; if yes, then use the filled array and dispose of it afterwards.
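Something like this, as a hypothetical sketch (FillJob and the polling MonoBehaviour are made up for illustration):

using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

public struct FillJob : IJob
{
    public NativeArray<float> output;

    public void Execute()
    {
        for (int i = 0; i < output.Length; i++)
            output[i] = i * 0.5f; // placeholder fill work
    }
}

public class FillAndPoll : MonoBehaviour
{
    private NativeArray<float> data;
    private JobHandle handle;
    private bool pending;

    void Start()
    {
        data = new NativeArray<float>(4096, Allocator.Persistent);
        handle = new FillJob { output = data }.Schedule();
        pending = true;
    }

    void Update()
    {
        if (!pending || !handle.IsCompleted)
            return;
        handle.Complete();     // still required before the main thread may touch the array
        float first = data[0]; // use the filled data here
        data.Dispose();        // dispose only after Complete()
        pending = false;
    }

    void OnDestroy()
    {
        // Avoid leaking if the object is destroyed while the job is in flight.
        if (pending) { handle.Complete(); data.Dispose(); }
    }
}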
Because it uses NativeArrays with the TempJob allocator under the hood. So after 4 frames you will get a bunch of warnings that TempJob allocations are older than 4 frames.
So how should I combine dependencies in cases where jobs might run longer than 4 frames?
Only use chains of jobs, like:
var dep1 = job1.Schedule();
var dep2 = job2.Schedule(dep1);
var dep3 = job3.Schedule(dep2);
I didn't try passing a Persistent NativeArray into JobHandle.CombineDependencies; maybe that will work too.
It does not use NativeArrays/Slices by itself under the hood; only if you pass them as arguments will their pointer be used on the C++ side. It does not allocate any new native arrays/slices. If you pass the handles to CombineDependencies as a NativeArray/Slice, you can (and should) dispose of that array immediately after calling CombineDependencies: the combining happens synchronously on the main thread, the array/slice is just a utility container (which can, and for better performance should, be allocated with the Temp allocator), and it isn't used for anything after the call. None of this has anything to do with whether the job is long-running.
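A sketch of that pattern (jobA and jobB stand in for handles you scheduled earlier):

using Unity.Collections;
using Unity.Jobs;

// jobA and jobB stand in for previously scheduled handles.
JobHandle jobA = default, jobB = default;

// The handle array is only a utility container that CombineDependencies
// reads synchronously, so the Temp allocator is fine here.
var handles = new NativeArray<JobHandle>(2, Allocator.Temp);
handles[0] = jobA;
handles[1] = jobB;
JobHandle combined = JobHandle.CombineDependencies(handles);
handles.Dispose(); // safe immediately: the array isn't used after the call returns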
Two votes is not a good marker, and it isn't even confirmed; I'm 100% sure those two people just messed up other things and failed to dispose other native arrays. As simple proof: here are jobs living for more than 4 frames, around 10 seconds, and I run this test in our production project, where many other jobs and systems are doing their work and combining dependencies.
Again: CombineDependencies does nothing with native collections itself; looking at the source code is enough to see that. It only gets a read-only pointer from the passed array/slice of handles and uses the handles from that array. Moreover, CombineDependencies does this right at the moment it's called; it's a synchronous main-thread call, so we can safely dispose the handle array we allocated ourselves right after calling CombineDependencies, as it isn't used for anything after that.
Here is a very simple example:
using System.Collections;
using System.Threading;
using Unity.Jobs;
using UnityEngine;

public struct HeavyJob : IJob {
    public void Execute() {
        Thread.Sleep(3000);
    }
}

public struct Job : IJob {
    public void Execute() {}
}

public class TestJobs : MonoBehaviour {
    private JobHandle handle;

    // Start is called before the first frame update
    void Start() {
        var heavyJob = new HeavyJob().Schedule();
        var job1 = new Job().Schedule(heavyJob);
        var job2 = new Job().Schedule(heavyJob);
        handle = JobHandle.CombineDependencies(job1, job2);
        StartCoroutine(Waiting());
    }

    private IEnumerator Waiting() {
        yield return new WaitUntil(() => Time.frameCount > 60);
        handle.Complete();
        Debug.Log("done -y");
    }
}
Console output:
Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
Internal: deleting an allocation that is older than its permitted lifetime of 4 frames (age = 12)
done -y
Tested on Unity 2019.4.7.
I don’t know if this is the case for later versions.
Looks like it is still not working in 2020.1: JobHandle.CombineDependencies causing TempJob allocation warnings.
Yes, I still have the same problem.
So I’m wondering… if in the first frame say 5 jobs get scheduled, each depending on the previous… if one or more of those jobs are slow, it’s possible some or all of them could finish in the next frame… right? Or is there some mechanism that keeps them from spanning frames and must be disabled to allow threads to cross into the next frame?
The only two things that stop jobs from crossing into the next frame are that they finish before then, or that something on the main thread calls Complete(). If you suspect otherwise, the culprit is probably the latter, done by some mechanism you aren't aware of but can avoid to some degree.