[BurstCompile] Span<T> support

Are there any plans to support the Span family of types? For instance, ReadOnlySpan throws an unsupported exception.

Full stack trace:

Unexpected exception Burst.Compiler.IL.CompilerException: Unexpected exception ---> Burst.Compiler.IL.CompilerException: Error while processing function System.Void ECSNet.TestSystem/NetworkEventJob::Execute(ECSNet.NetworkEvent&) ---> Burst.Compiler.IL.CompilerException: Error while processing variable System.ReadOnlySpan`1<ECSNet.CMDConnect> var.0; ---> System.NotSupportedException: The managed class type System.Pinnable`1<ECSNet.CMDConnect> is not supported by burst
at Burst.Compiler.IL.ILVisitor.CompileType (Mono.Cecil.TypeReference typeReference, Burst.Compiler.IL.Syntax.GenericContext genericContext, System.Collections.Generic.HashSet`1[T] structBeingVisited, Burst.Compiler.IL.Syntax.TypeUsage typeUsage) [0x002d6] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.CompileStruct (Mono.Cecil.TypeDefinition typeDefinition, Mono.Cecil.TypeReference typeReference, Burst.Compiler.IL.Syntax.GenericContext genericContext, System.Collections.Generic.HashSet`1[T] structBeingVisited, Burst.Compiler.IL.ILVisitor+TypeCacheKey typeCacheKey, Burst.Backend.TypeHandle& ret) [0x00130] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.CompileType (Mono.Cecil.TypeReference typeReference, Burst.Compiler.IL.Syntax.GenericContext genericContext, System.Collections.Generic.HashSet`1[T] structBeingVisited, Burst.Compiler.IL.Syntax.TypeUsage typeUsage) [0x0029a] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.CompileType (Mono.Cecil.TypeReference typeReference, Burst.Compiler.IL.Syntax.GenericContext genericContext, Burst.Compiler.IL.Syntax.TypeUsage typeUsage) [0x00006] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.CreateLocalVariableImpl (Burst.Compiler.IL.Syntax.ILLocalVariable variable, Mono.Cecil.TypeReference variableType) [0x00015] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVerifier.CreateLocalVariableImpl (Burst.Compiler.IL.Syntax.ILLocalVariable variable, Mono.Cecil.TypeReference variableType) [0x00024] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.CreateBackendLocalVariable (Burst.Compiler.IL.Syntax.ILLocalVariable variable) [0x00027] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.PreProcessInstructions () [0x00020] in <37bebafd236f4ccd943dc039a926a017>:0
--- End of inner exception stack trace ---
at Burst.Compiler.IL.ILVisitor.PreProcessInstructions () [0x00047] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.ProcessFunctionBody (Burst.Compiler.IL.Syntax.ILFunction function) [0x000fd] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.VisitPendingFunctions () [0x0000e] in <37bebafd236f4ccd943dc039a926a017>:0
--- End of inner exception stack trace ---
at Burst.Compiler.IL.ILVisitor.VisitPendingFunctions () [0x00033] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.VisitEntryPointFunction (Burst.Compiler.IL.MethodReferenceWithHash methodReference) [0x00066] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVisitor.VisitEntryPointFunction (Burst.Backend.Module module, Burst.Compiler.IL.MethodReferenceWithHash methodReference) [0x0001a] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILVerifier.VisitEntryPointFunction (Burst.Backend.Module module, Burst.Compiler.IL.MethodReferenceWithHash methodReference) [0x00000] in <37bebafd236f4ccd943dc039a926a017>:0
--- End of inner exception stack trace ---
at Burst.Compiler.IL.ILVerifier.VisitEntryPointFunction (Burst.Backend.Module module, Burst.Compiler.IL.MethodReferenceWithHash methodReference) [0x0001f] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.ILHash.CompileHash (Burst.Backend.Module module, Burst.Compiler.IL.MethodReferenceWithHash methodReference) [0x00000] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.NativeCompiler.ComputeHash () [0x000ea] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.Jit.JitCompiler.CompileMethod (Mono.Cecil.MethodReference methodReference, Burst.Compiler.IL.Jit.JitOptions jitOptions) [0x000aa] in <37bebafd236f4ccd943dc039a926a017>:0
at Burst.Compiler.IL.Jit.JitCompilerService.Compile (Burst.Compiler.IL.Jit.JitCompilerService+CompileJob job) [0x002b2] in <37bebafd236f4ccd943dc039a926a017>:0

While compiling job: System.Void Unity.Entities.JobProcessComponentDataExtensions/JobStruct_Process12<ECSNet.TestSystem/NetworkEventJob,ECSNet.NetworkEvent>::Execute(Unity.Entities.JobProcessComponentDataExtensions/JobStruct_Process12<T,U0>&,System.IntPtr,System.IntPtr,Unity.Jobs.LowLevel.Unsafe.JobRanges&,System.Int32)

Code:

        [BurstCompile]
        public struct NetworkEventJob : IJobProcessComponentData<NetworkEvent>
        {
            public void Execute([ReadOnly] ref NetworkEvent data)
            {
                ReadOnlySpan<CMDConnect> readOnlySpan;
                unsafe
                {
                    readOnlySpan = new ReadOnlySpan<CMDConnect>((void*)data.Data, 1);
                }
            }
        }

Had to dig up my older post on this here: NativeHashMap

The short version:

Burst expects, and restricts, memory access to direct memory allocations made with Unity’s NativeContainers. Span is an abstraction that can represent many different kinds of allocations while presenting them as a seemingly linear structure. It has more in common with ComponentDataArray (which is being deprecated in favor of chunk iteration and regular NativeContainer access) than with any other structure. The thing about Burst is that Unity’s types and idioms are built with Burst in mind (and Burst is mainly aware of the specific types it can optimize).

On top of this, Burst has the extra caveat of being unable/unsafe to access managed memory. A Span can represent both a managed allocation AND a native allocation, meaning Burst cannot safely assume the job threads have sole access, and it would likely cause runtime issues if the compiler didn’t complain about it first.


Thank you for the detailed explanation. I ended up keeping the [BurstCompile] job, but putting [BurstDiscard] on the method where the Span gets created.
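
In case it helps someone else, a minimal sketch of that workaround (ReadSpan is just a hypothetical helper name; NetworkEvent and CMDConnect are the types from the original post). Calls to a [BurstDiscard] method are removed when the job is compiled by Burst, so the Span code only runs in the managed fallback.

        [BurstCompile]
        public struct NetworkEventJob : IJobProcessComponentData<NetworkEvent>
        {
            public void Execute([ReadOnly] ref NetworkEvent data)
            {
                ReadSpan(ref data);
            }

            // Calls to [BurstDiscard] methods are stripped from the Burst-compiled version,
            // so this body only executes in the managed (non-Burst) path.
            [BurstDiscard]
            private static void ReadSpan(ref NetworkEvent data)
            {
                unsafe
                {
                    var span = new ReadOnlySpan<CMDConnect>((void*)data.Data, 1);
                    // ... read from span here ...
                }
            }
        }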


I have a question for you.
Do you know why creating a NativeArray produces GC allocations? Is there any way to avoid that?

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            NativeArray<CMDSend> commandSend = new NativeArray<CMDSend>(1, Unity.Collections.Allocator.TempJob);

            commandSend.Dispose();
            return inputDeps;
        }

Please note that all of the data inside the struct CMDSend is blittable.

Are you profiling this in the editor?

It should not create garbage at runtime. You shouldn’t really bother profiling garbage in the editor, because there are a lot of safety checks that produce garbage which don’t exist once you build.


Thank you.

@recursive one thing that I’ve noticed. You’ve said that NativeArray is more cache-friendly than Span. I would say it depends on the allocator that you use. If you allocate the memory for a Span with Marshal.AllocHGlobal, it is just allocated on the unmanaged heap. However, there are cache-friendly allocators designed to work well with the CPU cache: GitHub - nxrighthere/Smmalloc-CSharp: Blazing fast memory allocator designed for video games meets .NET
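
For illustration, a minimal sketch of backing a Span with an unmanaged heap allocation (the element type and count are arbitrary; Marshal is System.Runtime.InteropServices.Marshal):

            // Allocate room for 16 ints on the unmanaged heap and view it as a Span<int>.
            const int count = 16;
            IntPtr buffer = Marshal.AllocHGlobal(count * sizeof(int));
            unsafe
            {
                var span = new Span<int>((void*)buffer, count);
                span.Fill(0); // use the memory like a regular array
            }
            Marshal.FreeHGlobal(buffer); // unmanaged memory must be freed manually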

Hmm, interesting, though to be fair any linear array allocation is going to be cache-friendly compared to an object graph. I’ll take a look at this; there are some REST clients I’m involved with that might benefit from it.


@tertle , @recursive can I actually make a NativeArray act like a Span? Let’s say I allocate a NativeArray, get its pointer, and after some time convert the pointer back into a NativeArray to get the data?

I was looking at GetUnsafeReadOnlyPtr and NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray, however I am getting a null exception when trying to read from the converted array.

You also need to call NativeArrayUnsafeUtility.SetAtomicSafetyHandle to set up the safety handle when using ConvertExistingDataToNativeArray.

Some random example.

            // Pin a managed array so the GC can't move it, then wrap the pinned
            // memory as a NativeArray (no copy, no new allocation).
            managedData = new Vector4[1023];
            handle = GCHandle.Alloc(managedData, GCHandleType.Pinned);
            var ptr = handle.AddrOfPinnedObject().ToPointer();
            Data = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<float4>(ptr, managedData.Length, Allocator.Invalid);
#if ENABLE_UNITY_COLLECTIONS_CHECKS
            // The converted array has no safety handle of its own, so create one for it.
            NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref Data, AtomicSafetyHandle.Create());
#endif

I much appreciate that.
It worked. However, it doesn’t let me Dispose the array that I’ve just converted.

The original allocator needs to be the one to dispose it.

Speaking of,

Why are you converting it back? You already have a NativeArray that points to the same memory location, and that’s the one that needs to be tracked so you can dispose it.

I use it to allow inter-thread communication. Basically, I have non-blocking queues of pointers instead of a queue of NativeArrays. But I still have to figure out how I can dispose the array after I’ve converted it from the existing data.

            var array = new NativeArray<TestMessage>(1, Unity.Collections.Allocator.TempJob);
            array[0] = new TestMessage { TestInt = 25 };

            unsafe
            {
                var converted = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<TestMessage>(array.GetUnsafeReadOnlyPtr(),
                    1, Unity.Collections.Allocator.Invalid);
#if ENABLE_UNITY_COLLECTIONS_CHECKS
                NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref converted, AtomicSafetyHandle.Create());
#endif

                converted.Dispose(); // causes exception
            }

            array.Dispose(); // is fine

You may also want to look into NativeSlice and its related utilities. I’ve been able to use it for “temp” NativeArray-ish behavior until the feature arrives in Burst sometime later. Plus, several of the utilities like Sort() are already supported for it. That way you can allocate a large array and pass the native slices around, recycling the memory as needed. You could even back it with a NativeList. That way all you have to do is worry about which slices are tracked for which processing, and then only allocate/deallocate the memory in one place.
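
A rough sketch of that pattern, assuming a single persistent byte buffer (the buffer and slice sizes are placeholders):

            // One large backing allocation, made once and reused.
            var backing = new NativeArray<byte>(4096, Allocator.Persistent);

            // Hand out views into it instead of allocating per message.
            var header = new NativeSlice<byte>(backing, 0, 64);
            var payload = new NativeSlice<byte>(backing, 64, 1024);

            // ... pass the slices to jobs or queues; only the backing array is ever disposed ...
            backing.Dispose();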


I am not sure if this code is safe to use, but it works. It turns out that even though you call Dispose, the actual data can still be accessed afterwards.

        protected override void OnCreateManager()
        {
            base.OnCreateManager();

            var array = new NativeArray<int>(1, Unity.Collections.Allocator.TempJob);
            array[0] = 25;

            unsafe
            {
                var pointer = (IntPtr)array.GetUnsafeReadOnlyPtr();
                array.Dispose();

                Output(pointer);
            }

        }

        private void Output(IntPtr data)
        {
            unsafe
            {
                var converted = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<int>(data.ToPointer(),
                    1, Unity.Collections.Allocator.Invalid);

#if ENABLE_UNITY_COLLECTIONS_CHECKS
                NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref converted, AtomicSafetyHandle.Create());
#endif

                Debug.Log(converted[0]);
            }
        }
    }

There’s a decent chance the memory could be reclaimed by another thread once it’s been released:

  • The NativeArray and friends use the same basic memory arena that the non-exposed Jobs use.
  • Once NativeContainer temp allocations are allowed in jobs and Burst jobs, all bets are off.

It’s working for now because you’re accessing that memory immediately on the main thread in the editor, which is slower and has more safety checks involved, but there’s no guarantee this won’t blow up in the near future once more API changes drop.

I’d still look into NativeSlice to pass a chunk of memory off to a queue to be worked on.

I don’t feel like that’s safe. The memory can probably be allocated again by Unity and overwritten.

            var array = new NativeArray<TestMessage>(1, Unity.Collections.Allocator.TempJob);
            array[0] = new TestMessage { TestInt = 25 };

            unsafe
            {
                var converted = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<TestMessage>(array.GetUnsafeReadOnlyPtr(),
                    1, Unity.Collections.Allocator.Invalid);
#if ENABLE_UNITY_COLLECTIONS_CHECKS
                NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref converted, AtomicSafetyHandle.Create());
#endif

                converted.Dispose(); // causes exception
            }

            array.Dispose(); // is fine

You don’t need this line:
converted.Dispose(); // causes exception
You’re not allocating anything with ConvertExistingDataToNativeArray, so it’s not up to this array to handle the memory.
You just need to call array.Dispose().

But the real point is: you don’t need this bit at all.

            unsafe
            {
                var converted = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<TestMessage>(array.GetUnsafeReadOnlyPtr(),
                    1, Unity.Collections.Allocator.Invalid);
#if ENABLE_UNITY_COLLECTIONS_CHECKS
                NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref converted, AtomicSafetyHandle.Create());
#endif

                converted.Dispose(); // causes exception
            }

Why create an array you already have? Just use array again after you manipulate the pointer.

I should mention that it’s single-producer, single-consumer communication between C# threads, not jobs. That means one thread allocates the memory and the memory is accessed only by another thread. I might read the data in a job though: the main thread creates the data, a job thread reads it, and then some sort of EndFrameSystem deallocates the memory.

The reason I bother with all of these pointers is the approach I’m using. It’s for networking. I have a serialization/deserialization C# thread. Once a network message gets deserialized, it pushes an Event with a pointer to the deserialized data. The Event is pushed to the main thread. The main thread then creates an entity with an IComponentData called NetworkPacket that contains an Opcode (enum) and Data (IntPtr). The data behind the IntPtr is all blittable. So I need to somehow convert it back to get the deserialized data. I was using the Span approach and it worked flawlessly when it was just non-Burst jobs. However, now I’ve started to get Burst compiler exceptions, and even the [BurstDiscard] attribute for reading doesn’t work.
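
A simplified sketch of that component (field names as described above; the opcode values are just placeholders):

        public enum Opcode
        {
            Connect, // placeholder values
            Send
        }

        // Blittable component: an opcode plus a pointer to the already-deserialized, blittable payload.
        public struct NetworkPacket : IComponentData
        {
            public Opcode Opcode;
            public IntPtr Data;
        }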

You’re missing what I’m trying to say.

Option 1. Convert native array to span.

  • Create NativeArray array with new() and an allocator.
  • Get the pointer of the NativeArray: pointer = array.GetUnsafeReadOnlyPtr().
  • Use the pointer to create a Span<T>.
  • Manipulate the Span<T>.
  • Access the changes through array.
  • Dispose array.

i.e. NativeArray<T> → Span<T>
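
A minimal sketch of option 1 (element type and values are arbitrary; GetUnsafePtr is used instead of GetUnsafeReadOnlyPtr because the span is written to):

            var array = new NativeArray<int>(16, Allocator.TempJob);

            unsafe
            {
                // View the NativeArray's memory as a Span; no copy, no extra allocation.
                var span = new Span<int>(array.GetUnsafePtr(), array.Length);
                span[0] = 25;
            }

            Debug.Log(array[0]); // prints 25: the change is visible through the original array
            array.Dispose();     // the original allocation is the only thing to dispose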

Option 2. Convert span to native array.

  • Create a Span<T>.
  • Manipulate the Span<T>.
  • Create the array using NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray from the Span<T> pointer.
  • No need to dispose; the memory is handled by the managed side.

i.e. Span<T> → NativeArray<T>
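
And a minimal sketch of option 2, assuming the Span wraps a pinned managed array so its address stays stable:

            // Managed buffer, pinned so its address stays stable while unmanaged code uses it.
            var managed = new int[16];
            var handle = GCHandle.Alloc(managed, GCHandleType.Pinned);

            unsafe
            {
                void* ptr = handle.AddrOfPinnedObject().ToPointer();

                // Work on the memory through a Span first...
                var span = new Span<int>(ptr, managed.Length);
                span[0] = 25;

                // ...then wrap the same memory as a NativeArray for job code.
                // Nothing to Dispose: the GCHandle/managed side owns this memory.
                var native = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<int>(ptr, managed.Length, Allocator.Invalid);
#if ENABLE_UNITY_COLLECTIONS_CHECKS
                NativeArrayUnsafeUtility.SetAtomicSafetyHandle(ref native, AtomicSafetyHandle.Create());
#endif
            }

            handle.Free(); // release the pin when the data is no longer needed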

What you’re doing is converting a native array to a span and back to a native array:
NativeArray<T> (A) → Span<T> → NativeArray<T> (B)
where A and B are identical, except B can’t dispose of the memory.
