Two dimensional NativeArray

Hello everyone.

I'm trying to make a 2-dimensional array inside a job, using an array of NativeArrays like NativeArray<Entity>[]:

Job
            .WithReadOnly(cellCompType)
            .WithReadOnly(entityType)
            .WithDisposeOnCompletion(chunks)
            .WithCode(() => {
                for (int c = 0; c < chunks.Length; c++)
                {
                    var chunk = chunks[c];
                    var cellType = chunk.GetNativeArray(cellCompType);
                    var entities = chunk.GetNativeArray(entityType);
                    NativeArray<Entity>[] arr2d = new NativeArray<Entity>[chunk.Count];

                    for (int i = 0; i < chunk.Count; i++)
                    {
                        arr2d[i] = new NativeArray<Entity>(chunk.Count, Allocator.TempJob);
                    }

                    for (int i = 0; i < chunk.Count; i++)
                    {
                        arr2d[cellType[i].Position.x][cellType[i].Position.y] = entities[i];
                    }
                }
            }).Schedule();

And I got this error:

(0,0): Burst error BC1054: Unable to resolve type `T. Reason: Unknown.`

at CellMatchSystem.<>c__DisplayClass_OnUpdate_LambdaJob0.Execute(CellMatchSystem.<>c__DisplayClass_OnUpdate_LambdaJob0* this)
at Unity.Jobs.IJobExtensions.JobStruct`1<CellMatchSystem.<>c__DisplayClass_OnUpdate_LambdaJob0>.Execute(ref CellMatchSystem.<>c__DisplayClass_OnUpdate_LambdaJob0 data, System.IntPtr additionalPtr, System.IntPtr bufferRangePatchData, ref Unity.Jobs.LowLevel.Unsafe.JobRanges ranges, int jobIndex)


While compiling job: System.Void Unity.Jobs.IJobExtensions/JobStruct`1<CellMatchSystem/<>c__DisplayClass_OnUpdate_LambdaJob0>::Execute(T&,System.IntPtr,System.IntPtr,Unity.Jobs.LowLevel.Unsafe.JobRanges&,System.Int32)
at <empty>:line 0

What am I doing wrong?

As far as I am aware, you cannot have a NativeArray inside a NativeArray in jobs.

But you can flatten/unflatten any number of dimensions.


You can use a normal one-dimensional array if you convert from (x, y) into an index, given the width:
int index = x + y * width;

If it’s a 3x2 array, for example, its indices would be like so:

3 4 5 (1st row of width 3)
0 1 2 (0th row of width 3)

So where is (1, 1)? 1 + 1 * 3 = index 4.
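A minimal sketch of those two conversions (the grid and someEntity here are just placeholders):

int Flatten(int x, int y, int width) => x + y * width;   // row-major: each row is `width` entries

void Unflatten(int index, int width, out int x, out int y)
{
    x = index % width;
    y = index / width;
}

// A 3x2 grid stored in a single NativeArray of length 6:
var grid = new NativeArray<Entity>(3 * 2, Allocator.TempJob);
grid[Flatten(1, 1, 3)] = someEntity;   // writes index 4, matching the example above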


As @Lo-renzo clarified, flattening lets you use a single dimension for any number of dimensions; the same holds for 3 dimensions: int index = x + (y * width) + (z * width * height);. You can also still have variable lengths by indexing into a header section. For example, if you want 32 rows, each with any number of their own elements, the first value could be 32, followed by 32 values storing the length of each row (and possibly another 32 values storing the offsets, to avoid the overhead of computing them by summing lengths). You could instead store only the offsets, but then you're stuck calculating lengths, so unless you want to prioritize array size over processing speed, I'd store both offsets and lengths for variable-length nesting.
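A rough sketch of that header layout (all names here are made up, and the packed data is ints only to keep it short):

// Layout: [count][offset 0..count-1][length 0..count-1][row data...]
int count = rowLengths.Length;           // e.g. 32 rows of varying length
int headerSize = 1 + count * 2;

int total = headerSize;
for (int i = 0; i < count; i++)
    total += rowLengths[i];

var packed = new NativeArray<int>(total, Allocator.TempJob);
packed[0] = count;

int offset = headerSize;
for (int i = 0; i < count; i++)
{
    packed[1 + i] = offset;                  // where row i's data starts
    packed[1 + count + i] = rowLengths[i];   // how many elements row i has
    offset += rowLengths[i];
}

// Element j of row i is then: packed[packed[1 + i] + j]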


Specifically in this line:

NativeArray<Entity>[] arr2d = new NativeArray<Entity>[chunk.Count];

You’re defining a managed array of NativeArrays. Managed arrays are not supported by Burst - you can read about other Burst limitations here.

Flattening is the way to go, and it’s generally very useful to know how to implement this. You might also be interested in this repository which provides a NativeArray2D collection that will convert your indices automatically during access.
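Applied to the original snippet, the inner part could look roughly like this. This is only a sketch: it assumes the grid dimensions (gridWidth, gridHeight) are known up front and that every Position falls inside them.

// one flat NativeArray instead of a managed array of NativeArrays
var grid = new NativeArray<Entity>(gridWidth * gridHeight, Allocator.Temp);

for (int i = 0; i < chunk.Count; i++)
{
    // (x, y) -> flat index
    int index = cellType[i].Position.x + cellType[i].Position.y * gridWidth;
    grid[index] = entities[i];
}

// ... read back with grid[x + y * gridWidth] ...
grid.Dispose();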


I've used a NativeHashMap with an integer key that is hashed from 2 parameters, in my case entity + someId.
With a NativeMultiHashMap I could work around the need for NativeArrays inside NativeArrays.
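For a 2D grid specifically, an int2 key works the same way; a quick sketch (capacity, cell and entity are placeholders, and in newer Unity.Collections versions the type is called NativeParallelHashMap):

var map = new NativeHashMap<int2, Entity>(capacity, Allocator.TempJob);

map.TryAdd(new int2(cell.Position.x, cell.Position.y), entity);

if (map.TryGetValue(new int2(x, y), out Entity found))
{
    // ... use found ...
}

map.Dispose();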

Edit:
Does anyone know why it's not possible? Isn't it just a pointer?

You cannot pass references into a job, such as a container inside a container.
There is also the problem of multidimensional array sizes:
if the inner arrays were of different sizes, that would lead to a massive performance drop.
I think a similar topic was discussed not so long ago.

I suggest the typical flattened array; it should do the job.

You can’t nest NativeContainers inside other NativeContainers because otherwise the safety system would have to iterate through all elements of the outer NativeContainer to validate the safety of the inner containers on every access. That’s just awful to support, so Unity decided to not support it (and as someone currently writing custom containers, I’m glad they made that decision).

I have arrays that hold subarrays of different sizes. What I used before was List<Data>[], which worked fine, but packing a double array into one with this kind of data feels a little wrong to me, as I am always going to allocate the most I will ever need, even though testing shows I hardly ever use more than 30% of it.

What can I do?

@Nyanpas You can use NativeArray<UnsafeList<T>>, if you’re willing to give up on some of the safety features.
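A minimal sketch of that combination (UnsafeList<T> lives in Unity.Collections.LowLevel.Unsafe; note the write-back, since UnsafeList is a struct):

var rows = new NativeArray<UnsafeList<Entity>>(rowCount, Allocator.Persistent);
for (int i = 0; i < rowCount; i++)
    rows[i] = new UnsafeList<Entity>(initialCapacity, Allocator.Persistent);

// adding an element: copy the struct out, modify it, write it back
var row = rows[3];
row.Add(someEntity);
rows[3] = row;

// the safety system does not track the inner lists, so dispose them manually
for (int i = 0; i < rows.Length; i++)
    rows[i].Dispose();
rows.Dispose();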


How unsafe can this be? Can this be used to hack a games console?

That is the least of your worries. Anyone with access to the memory can modify it anyway.

Unsafe here means you have no safety-net checks in case of errors, overflows, or race conditions.


So as usual, then. Very nice. UwU

Regarding the 2D array in your case: what is the size of the first dimension, and how many elements are there in each second-dimension array? Do they vary by much?

Is this something that you could represent with multiple NativeArrays?
Or, if you really need to save space and compress the data, you could again flatten the 2D array into a 1D array and keep a secondary array storing the starting index and corresponding length of each second-dimension array. That way you can navigate to any point of your array.

Another option could be using entities and dynamic buffers to store multidimensional arrays.

All sizes vary by a great deal, but I have upper limits on both regardless, to avoid issues with memory allocation. I am currently using a 1D array acting as a 2D array with indexing, but as stated I only get about 30% use out of it, so I would like to know if there are optimisation methods to find empty spaces and shrink the total array length down to only the places that are filled. The indexing would work as before, except using the filled length of an index as the start offset for the next part.

One way could be moving the back of the array into the empty space as soon as it is released.
So, for example, if your array capacity is 10 and you remove the 5th element, you move the last element into the 5th position. This way you always have free space at the back and no need to iterate through the array later, looking for empty gaps.

This could potentially save a bit on later iterations, depending on the use case.
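A sketch of that swap-back removal on a flat array (items and count are hypothetical); NativeList<T> also has RemoveAtSwapBack built in for the same pattern:

// remove the element at `index` without leaving a gap: move the last element into it
count--;
items[index] = items[count];    // order is not preserved, but the array stays dense
items[count] = Entity.Null;     // optional: clear the now-unused back slot

// with a NativeList<Entity> the same thing is just:
list.RemoveAtSwapBack(index);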

Of course you could run a defragmentation algorithm.
What I would try is, for example, an IJobParallelFor sized to the first dimension.
Using the index offset, each thread then iterates only through its own second dimension (on the flattened 1D array, of course), looks for empty gaps starting from the front, and moves elements there from the back.
You would also need to store the new lengths.
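A rough sketch of that per-row compaction job, under a few assumptions I'm making up here: Entity.Null marks an empty slot, and MaxRowLength is the fixed stride of the flattened array.

[BurstCompile]
struct CompactRowsJob : IJobParallelFor
{
    // each thread touches only its own row, so the parallel-for restriction is disabled
    [NativeDisableParallelForRestriction] public NativeArray<Entity> Flattened;
    public NativeArray<int> RowLengths;   // new filled length per row, written here
    public int MaxRowLength;

    public void Execute(int row)
    {
        int baseIndex = row * MaxRowLength;
        int write = 0;

        // move every non-empty element toward the front of this row
        for (int read = 0; read < MaxRowLength; read++)
        {
            Entity e = Flattened[baseIndex + read];
            if (e != Entity.Null)
            {
                Flattened[baseIndex + write] = e;
                if (write != read)
                    Flattened[baseIndex + read] = Entity.Null;
                write++;
            }
        }

        RowLengths[row] = write;
    }
}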

Then, if needed, you can in a separate job shift whole second-dimension arrays for a given first-dimension index, to compress the data.

But the question is whether it is worth going that extra step. Defragmentation, sure, but data compression… especially if you want to resize your arrays again later?
How much memory does your array consume?

These arrays are created and used at the same time as the rest of the game logic happens, so during run-time in real-time. Since they are meant to be used to create meshes for large cities and lots of different structures, they tend to be large in size, but it all depends as stated above. Anything I can do to keep memory usage low is what I strive for.

The lifetime of an array is of course the whole duration of the “session”, so reuse is obviously going to happen, but it feels like overkill to size it for a 40-floor office complex when it is mostly going to be used for small individual homes. I could perhaps split them into “size ranges” and use several.

Thanks for the explanation.
To me, it really looks like you want to use entities with DynamicBuffers instead.
For example, an entity per building, and then you have your buffer inside, of the desired size.
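A small sketch of what that could look like (BuildingCell is a made-up element type):

public struct BuildingCell : IBufferElementData
{
    public Entity Value;
}

// one entity per building, each with its own variable-length buffer
Entity building = EntityManager.CreateEntity();
DynamicBuffer<BuildingCell> cells = EntityManager.AddBuffer<BuildingCell>(building);
cells.Capacity = cellCount;                         // size each building independently
cells.Add(new BuildingCell { Value = someEntity });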


One additional option I suggested in another post (related to vertices, UVs and such), and that seemed to work for that user, is to use a NativeStream.

If you write each Data structure to a different index of a NativeStream, then for each of those indices (so each 1st dimension of the 2D array of data) you write the following (pseudo code):

nativeStream.Write<int>(someArray.Count);
foreach (float3 value in someArray)
{
    nativeStream.Write<float3>(value);
}

Then you have effectively stored a 2D array with a variable-length 2nd dimension.
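With the actual NativeStream API that looks roughly like this (a sketch; rowCount, rowLengths and GetValue are placeholders). One nice detail is that the reader gets the element count back from BeginForEachIndex, so you don't strictly need to write it yourself:

var stream = new NativeStream(rowCount, Allocator.TempJob);

var writer = stream.AsWriter();
for (int row = 0; row < rowCount; row++)
{
    writer.BeginForEachIndex(row);
    for (int j = 0; j < rowLengths[row]; j++)
        writer.Write(GetValue(row, j));           // float3 values of this row
    writer.EndForEachIndex();
}

var reader = stream.AsReader();
for (int row = 0; row < rowCount; row++)
{
    int count = reader.BeginForEachIndex(row);    // number of values written for this row
    for (int j = 0; j < count; j++)
    {
        float3 value = reader.Read<float3>();
        // ... use value ...
    }
    reader.EndForEachIndex();
}

stream.Dispose();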