I’m new to Unity Jobs and have made some really good progress - I’ve definitely seen massive performance improvements using it - but there are a couple of things that I am unsure of.
1: I’d like to have nested native arrays. Is there a way to get this working and keep things multi-threaded? I specify that I want to keep it multi-threaded because I read that NativeLists don’t work multithreaded and you cannot use them in a JobParallelFor. I wanted to create a struct with native arrays inside it, but I understand that this causes uncertainty in the amount of memory it takes. I also read about using native hashmaps - I failed in my attempt to use this, but it was early on in my attempts with Unity Jobs, so I could give it another shot. Atm, I have a 2d array implemented in a regular NativeArray, but it only works if the amount of data is always consistent across every iteration within the JobParallelFor. I can cheat this by always assuming the max size, but then I’m allocating memory that I don’t need.
2: I am running a small set of jobs every frame that take care of moving all characters in the game. I have it working with Allocator.TempJob, but I was wondering if I should be using Allocator.Persistent instead. I keep allocating and deallocating the same amount of space every frame - the size will sometimes change, but only based on user input - and I believe I could save some performance if I can avoid all of this constant allocation. Perhaps I could deallocate and reallocate persistent memory only when the number of moving units changes - unless there’s a way to resize an existing allocation? From my digging around, I believe I should be looking into Unity.Collections.LowLevel.Unsafe?
Nope. This is not a thing that exists nor will exist in the near future (and possibly the distant future).
Not entirely true. You can use them in certain ways in parallel jobs. They are one of the best ways to handle dynamic memory in a job chain.
Do you know what the sizes of each array are going to be before they are populated? If so, you can create a single array of data and another smaller array of int2 where x is the start offset index and y is the count. Then you can use GetSubArray to make your code even cleaner.
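A minimal sketch of that layout (the job and field names are mine, not an official API): one flat data array plus an int2 per group, consumed in an IJobParallelFor where each parallel index handles one whole group:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

[BurstCompile]
struct SumGroupsJob : IJobParallelFor
{
    // One entry per group: x = start offset into Data, y = element count.
    [ReadOnly] public NativeArray<int2> Ranges;

    // All groups packed end-to-end into one flat array. Each index reads
    // outside its "own" slot, so the parallel-for restriction is disabled.
    [NativeDisableParallelForRestriction]
    [ReadOnly] public NativeArray<float> Data;

    public NativeArray<float> Sums;

    public void Execute(int groupIndex)
    {
        int2 range = Ranges[groupIndex];
        // GetSubArray returns a view of just this group's slice of the flat array.
        NativeArray<float> group = Data.GetSubArray(range.x, range.y);

        float sum = 0f;
        for (int i = 0; i < group.Length; i++)
            sum += group[i];
        Sums[groupIndex] = sum;
    }
}
```

You would schedule it with one index per group, e.g. job.Schedule(ranges.Length, 1), so groups of different lengths are no problem.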
I wouldn’t worry about this right now unless it is a major performance issue. Unity has plans to fix this. While there are benefits to using persistent allocations to save allocation operations, there is also the benefit of reusing temporary memory to reduce overall memory usage and keep the temporary memory in cache. Someone from the DOTS team made a post recently about how they were looking to drastically improve the latter case and bring those temporary allocations down to a ridiculously small number of instructions. In the meantime, just use what works.
You can just write your own data structures and do anything you like, but you will lose memory-leak protection, as DisposeSentinel is not allowed in ComponentData. You will face random crashes and memory leaks if you go down this road.
There are some very annoying C# restrictions when it comes to generics and pointers; see IntPtr and UnsafeUtility for ways around those. You can also use the non-generic UnsafeList (with UnsafeUtility.Read/WriteArrayElement), UnsafeList<> and UnsafeHashMap<>, or just copy those data structures and remove the things you don’t need - or see if they work for you as-is. If you will be making heavy use of pointers, you may consider maintaining variants of data structures that take and return pointers to their generic type and store them as IntPtr. Finally, be super careful not to make mutable struct fields readonly: calling methods on such a field will silently copy the struct and not write it back…
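As an illustration of the UnsafeUtility workaround (purely a sketch - no safety checks, no leak detection, and you must call Dispose yourself): C# won’t let you declare a T* for a generic T, but a void* plus Read/WriteArrayElement gets you the same thing:

```csharp
using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;

// Minimal unsafe buffer sketch: no DisposeSentinel, no bounds checks.
unsafe struct RawBuffer<T> : IDisposable where T : struct
{
    [NativeDisableUnsafePtrRestriction] void* _ptr;
    public int Length;
    Allocator _allocator;

    public RawBuffer(int length, Allocator allocator)
    {
        Length = length;
        _allocator = allocator;
        _ptr = UnsafeUtility.Malloc(UnsafeUtility.SizeOf<T>() * length,
                                    UnsafeUtility.AlignOf<T>(), allocator);
    }

    // Read/WriteArrayElement sidestep the "no pointers to generic T" restriction.
    public T this[int i]
    {
        get => UnsafeUtility.ReadArrayElement<T>(_ptr, i);
        set => UnsafeUtility.WriteArrayElement(_ptr, i, value);
    }

    public void Dispose()
    {
        if (_ptr != null)
        {
            UnsafeUtility.Free(_ptr, _allocator);
            _ptr = null;
        }
    }
}
```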
That would be great; I switched to persistent allocations and saw a massive speedup in one particular application. Afaik there is no nice way of providing long-lived data of variable length on a per-thread basis (for fixed sizes you can allocate one big array, then use the thread index and NativeSlice<> to hand out chunks). I use the [NativeSetThreadIndex] attribute and nestable structures of some kind to get the job done.
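The fixed-size case might look like this (a sketch; the chunk size and names are made up). The caller sizes the scratch array to JobsUtility.MaxJobThreadCount * ChunkSize, and the injected thread index keeps each worker on its own slice:

```csharp
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs;

// One big scratch array carved into per-thread chunks via the thread index.
struct PerThreadScratchJob : IJobParallelFor
{
    public const int ChunkSize = 64;

    // Filled in by the job system with the executing worker's index.
    [NativeSetThreadIndex] int _threadIndex;

    // Sized to JobsUtility.MaxJobThreadCount * ChunkSize by the caller.
    [NativeDisableParallelForRestriction]
    public NativeArray<float> Scratch;

    public void Execute(int index)
    {
        // This thread's private slice of the shared array.
        var mine = new NativeSlice<float>(Scratch, _threadIndex * ChunkSize, ChunkSize);
        mine[0] = index; // ...use the scratch space
    }
}
```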
The same applies to associating long-lived, variable-length data with entities beyond DynamicBuffers. I have not tried this yet myself, but you are allowed to put pointers in component data. Allocations and cleanup should be easily handled from System State Components.
Something like this. You still can’t use NativeContainers, so you are leaving yourself open to memory leaks, but at least now the risk is contained to forgetting to call Dispose once:
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Entities;
using UnityEngine;
public class NewBehaviourScript : MonoBehaviour
{
    void Start()
    {
        var world = World.All[0];
        var em = world.EntityManager;
        var e = em.CreateEntity();
        em.AddComponentData(e, new Data());
    }
}

class DoAThingSystem : SystemBase
{
    protected override void OnUpdate()
    {
        Entities
            .WithBurst()
            .ForEach((Data data, Resources resources) =>
            {
                for (int i = 0; i < resources.Foos.Length; i++)
                    Debug.Log($"{resources.Foos[i]}");
            })
            .ScheduleParallel();
    }
}

class ResourcesSystem : SystemBase
{
    EndSimulationEntityCommandBufferSystem _ecbSource;

    protected override void OnCreate()
    {
        _ecbSource = World.GetExistingSystem<EndSimulationEntityCommandBufferSystem>();
    }

    protected override void OnUpdate()
    {
        var buffer = _ecbSource.CreateCommandBuffer().AsParallelWriter();
        Entities
            .WithBurst()
            .WithNone<Resources>()
            .ForEach((Entity entity, int entityInQueryIndex, in Data data) =>
            {
                var resources = new Resources();
                resources.Allocate();
                buffer.AddComponent(entityInQueryIndex, entity, resources);
            })
            .ScheduleParallel();
        _ecbSource.AddJobHandleForProducer(Dependency);

        buffer = _ecbSource.CreateCommandBuffer().AsParallelWriter();
        Entities
            .WithBurst()
            .WithNone<Data>()
            .ForEach((Entity entity, int entityInQueryIndex, Resources resources) =>
            {
                resources.Dispose();
                buffer.RemoveComponent<Resources>(entityInQueryIndex, entity);
            })
            .ScheduleParallel();
        _ecbSource.AddJobHandleForProducer(Dependency);
    }

    protected override void OnDestroy()
    {
        Entities
            .WithBurst()
            .ForEach((Resources resources) => resources.Dispose())
            .Run();
    }
}

struct Data : IComponentData
{
}

struct Resources : ISystemStateComponentData
{
    public UnsafeList<int> Foos;

    public void Allocate()
    {
        Foos = new UnsafeList<int>(8, Allocator.Persistent);
        Foos.Add(23);
    }

    public void Dispose()
    {
        Foos.Dispose();
    }
}
What I mean is the innerLoopBatchCount. Unless I misunderstand what it is, I cannot access any array values that are not within the subset of the provided NativeArrays. So if each ‘group’ of data is 40 long, I need innerLoopBatchCount == 40, but if each group has varying lengths, it won’t work.
EDIT: but yes, I know the lengths before starting the job
You can use [NativeDisableParallelForRestriction] on the array of data but not on the smaller int2 array. If you pack them together in a struct with [NoAlias] as private fields and provide an API for getting the sub array, you should have full index thread safety.
That’s very interesting. I get how that allows me full access to my packed 2d array. Two questions though.
1: [NativeDisableParallelForRestriction] is clear, but what exactly does [NoAlias] achieve?
2: So I can use this struct in a NativeArray<TestStruct> inside of a job? In that case, I don’t care about packing the array, because each index inside of a JobParallelFor will have its own array anyway. I’m assuming that this is not how things work, though.
It isn’t necessary. It lets Burst optimize a little more aggressively. You can read up on the Burst documentation to learn more.
Nope. It serves as a replacement for the nested array. You said you knew the sizes of all the nested arrays in advance. You put all those sizes into a NativeArray of int and pass that to the constructor. Then you can fill and access each nested array using GetArray().
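Roughly what such a struct could look like (a sketch - the NestedArrays name, GetArray signature and fields are my guesses at the API being described, not official Unity types):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Mathematics;

// Built from per-group sizes, backed by one flat allocation, handing out
// sub-array views per group.
struct NestedArrays<T> : System.IDisposable where T : struct
{
    [NoAlias]
    [NativeDisableParallelForRestriction]
    NativeArray<T> _data;                 // all groups, packed flat

    [NoAlias]
    [ReadOnly] NativeArray<int2> _ranges; // x = start offset, y = count

    public NestedArrays(NativeArray<int> sizes, Allocator allocator)
    {
        _ranges = new NativeArray<int2>(sizes.Length, allocator);
        int total = 0;
        for (int i = 0; i < sizes.Length; i++)
        {
            _ranges[i] = new int2(total, sizes[i]);
            total += sizes[i];
        }
        _data = new NativeArray<T>(total, allocator);
    }

    public int Count => _ranges.Length;

    // Each parallel index should only ever touch its own group's slice.
    public NativeArray<T> GetArray(int groupIndex)
    {
        int2 r = _ranges[groupIndex];
        return _data.GetSubArray(r.x, r.y);
    }

    public void Dispose()
    {
        _data.Dispose();
        _ranges.Dispose();
    }
}
```

Inside an IJobParallelFor you would then call GetArray(index) to fill or read the group belonging to that index.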