While there is a NativeHashMap, there does not seem to be a NativeHashSet. Is there going to be one eventually, or would I need to implement it myself (or am I missing some reason why it is not needed)?
I was wondering the same thing yesterday.
You could use a NativeHashMap as a NativeHashSet. Use <T, byte> or <T, bool> as your type arguments. When adding an element T t, call either hash.TryAdd(t, 0) or hash.TryAdd(t, true). The second type argument should be something small to minimize wasted space.
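A minimal sketch of that approach, assuming the Unity.Collections API of the time (the name seen and the capacity are illustrative):
using Unity.Collections;
// Use the map as a set: the byte value is a throwaway placeholder.
var seen = new NativeHashMap<int, byte>(64, Allocator.Temp);
bool added = seen.TryAdd(42, 0); // true: first insertion
bool duplicate = seen.TryAdd(42, 0); // false: already present
bool contains = seen.TryGetValue(42, out _); // doubles as a membership test
seen.Dispose();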
I made a NativeHashSet a while ago with some improvements to clearing compared to NativeHashMap. I'm not sure if it still works without errors, as it was made months ago and I haven't retested it with the latest APIs. This was the latest commit I had of it:
NativeHashSet.cs:
using System;
using System.Runtime.InteropServices;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
namespace NativeContainers {
[StructLayout(LayoutKind.Sequential)]
[NativeContainer]
public unsafe struct NativeHashSet<T> : IDisposable where T : struct, IEquatable<T> {
[NativeDisableUnsafePtrRestriction] NativeHashSetData* buffer;
Allocator allocator;
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle m_Safety;
[NativeSetClassTypeToNullOnSchedule] DisposeSentinel m_DisposeSentinel;
#endif
public NativeHashSet(int capacity, Allocator allocator) {
NativeHashSetData.AllocateHashSet<T>(capacity, allocator, out buffer);
this.allocator = allocator;
#if ENABLE_UNITY_COLLECTIONS_CHECKS
DisposeSentinel.Create(
out m_Safety, out m_DisposeSentinel, callSiteStackDepth:8, allocator:allocator);
#endif
Clear();
}
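// Write-only view for parallel jobs; the job system fills in threadIndex via [NativeSetThreadIndex].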
[NativeContainer]
[NativeContainerIsAtomicWriteOnly]
public struct Concurrent {
[NativeDisableUnsafePtrRestriction] public NativeHashSetData* buffer;
[NativeSetThreadIndex] public int threadIndex;
#if ENABLE_UNITY_COLLECTIONS_CHECKS
public AtomicSafetyHandle m_Safety;
#endif
public int Capacity {
get {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckReadAndThrow(m_Safety);
#endif
return buffer->Capacity;
}
}
public bool TryAdd(T value) {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckWriteAndThrow(m_Safety);
#endif
return buffer->TryAddThreaded(ref value, threadIndex);
}
}
public int Capacity {
get {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckReadAndThrow(m_Safety);
#endif
return buffer->Capacity;
}
}
public int Length {
get {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckReadAndThrow(m_Safety);
#endif
return buffer->Length;
}
}
public bool IsCreated => buffer != null;
public void Dispose() {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckDeallocateAndThrow(m_Safety);
DisposeSentinel.Dispose(ref m_Safety, ref m_DisposeSentinel);
#endif
NativeHashSetData.DeallocateHashSet(buffer, allocator);
buffer = null;
}
public void Clear() {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckWriteAndThrow(m_Safety);
#endif
buffer->Clear();
}
public Concurrent ToConcurrent() {
Concurrent concurrent;
concurrent.threadIndex = 0;
concurrent.buffer = buffer;
#if ENABLE_UNITY_COLLECTIONS_CHECKS
concurrent.m_Safety = m_Safety;
#endif
return concurrent;
}
public bool TryAdd(T value) {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckWriteAndThrow(m_Safety);
#endif
return buffer->TryAdd(ref value, allocator);
}
public bool TryRemove(T value) {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckWriteAndThrow(m_Safety);
#endif
return buffer->TryRemove(value);
}
public bool Contains(T value) {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckReadAndThrow(m_Safety);
#endif
return buffer->Contains(ref value);
}
public NativeArray<T> GetValueArray(Allocator allocator) {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckReadAndThrow(m_Safety);
#endif
var result = new NativeArray<T>(Length, allocator, NativeArrayOptions.UninitializedMemory);
buffer->GetValueArray(result);
return result;
}
}
}
NativeHashSetData.cs:
using System;
using System.Runtime.InteropServices;
using System.Threading;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs.LowLevel.Unsafe;
using Unity.Mathematics;
using UnityEngine.Assertions;
namespace NativeContainers {
[StructLayout(LayoutKind.Sequential)]
public unsafe struct NativeHashSetData {
byte* values;
byte* next;
byte* buckets;
int valueCapacity;
int bucketCapacityMask;
// Adding padding to ensure remaining fields are on separate cache-lines
fixed byte padding[60];
fixed int firstFreeTLS[JobsUtility.MaxJobThreadCount * IntsPerCacheLine];
int allocatedIndexLength;
const int IntsPerCacheLine = JobsUtility.CacheLineSize / sizeof(int);
public int Capacity => valueCapacity;
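// Length is the number of entries handed out so far minus those sitting on the per-thread free lists.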
public int Length {
get {
int* nextPtrs = (int*)next;
int freeListSize = 0;
for(int tls = 0; tls < JobsUtility.MaxJobThreadCount; ++tls) {
int freeIdx = firstFreeTLS[tls * IntsPerCacheLine] - 1;
for(; freeIdx >= 0; freeListSize++, freeIdx = nextPtrs[freeIdx] - 1) {}
}
return math.min(valueCapacity, allocatedIndexLength) - freeListSize;
}
}
static int DoubleCapacity(int capacity) => capacity == 0 ? 1 : capacity * 2;
public static void AllocateHashSet<T>(
int capacity, Allocator label, out NativeHashSetData* buffer) where T : struct {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
if(!UnsafeUtility.IsBlittable<T>())
throw new ArgumentException($"{typeof(T)} used in NativeHashSet<{typeof(T)}> must be blittable");
#endif
var data = (NativeHashSetData*)UnsafeUtility.Malloc(
sizeof(NativeHashSetData), UnsafeUtility.AlignOf<NativeHashSetData>(), label);
int bucketCapacity = math.ceilpow2(capacity * 2);
data->valueCapacity = capacity;
data->bucketCapacityMask = bucketCapacity - 1;
int nextOffset, bucketOffset;
int totalSize = CalculateDataSize<T>(capacity, bucketCapacity, out nextOffset, out bucketOffset);
data->values = (byte*)UnsafeUtility.Malloc(totalSize, JobsUtility.CacheLineSize, label);
data->next = data->values + nextOffset;
data->buckets = data->values + bucketOffset;
buffer = data;
}
public static void DeallocateHashSet(NativeHashSetData* data, Allocator allocator) {
UnsafeUtility.Free(data->values, allocator);
data->values = null;
data->buckets = null;
data->next = null;
UnsafeUtility.Free(data, allocator);
}
public void Clear() {
UnsafeUtility.MemClear((int*)buckets, sizeof(int) * (bucketCapacityMask + 1));
UnsafeUtility.MemClear((int*)next, sizeof(int) * valueCapacity);
fixed(int* firstFreeTLS = this.firstFreeTLS) {
UnsafeUtility.MemClear(
firstFreeTLS, sizeof(int) * (JobsUtility.MaxJobThreadCount * IntsPerCacheLine));
}
allocatedIndexLength = 0;
}
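// Copies every stored value into result by walking each bucket chain; the assert verifies the count matches result.Length.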
public void GetValueArray<T>(NativeArray<T> result) where T : struct {
var buckets = (int*)this.buckets;
var nextPtrs = (int*)next;
int outputIndex = 0;
for(int bucketIndex = 0; bucketIndex <= bucketCapacityMask; ++bucketIndex) {
int valuesIndex = buckets[bucketIndex];
while(valuesIndex > 0) {
result[outputIndex] = UnsafeUtility.ReadArrayElement<T>(values, valuesIndex - 1);
outputIndex++;
valuesIndex = nextPtrs[valuesIndex - 1];
}
}
Assert.AreEqual(result.Length, outputIndex);
}
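// Single-threaded add; FindFirstFreeIndex may grow the storage, which is why the allocator is passed in.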
public bool TryAdd<T>(ref T value, Allocator allocator) where T : struct, IEquatable<T> {
if(Contains(ref value)) {
return false;
}
int valuesIdx = FindFirstFreeIndex<T>(allocator);
UnsafeUtility.WriteArrayElement(values, valuesIdx, value);
// Add the index to the hashset
int* buckets = (int*)this.buckets;
int* nextPtrs = (int*)next;
int bucketIndex = value.GetHashCode() & bucketCapacityMask;
nextPtrs[valuesIdx] = buckets[bucketIndex];
buckets[bucketIndex] = valuesIdx + 1;
return true;
}
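// Concurrent add used by NativeHashSet<T>.Concurrent: CAS-inserts the entry at the head of its bucket
// chain, returning it to this thread's free list if another thread added the same value first.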
public bool TryAddThreaded<T>(ref T value, int threadIndex) where T : struct, IEquatable<T> {
if(Contains(ref value)) {
return false;
}
// Allocate an entry from the free list
int idx = FindFreeIndexFromTLS(threadIndex);
UnsafeUtility.WriteArrayElement(values, idx, value);
// Add the index to the hashset
int* buckets = (int*)this.buckets;
int bucket = value.GetHashCode() & bucketCapacityMask;
int* nextPtrs = (int*)next;
if(Interlocked.CompareExchange(ref buckets[bucket], idx + 1, 0) != 0) {
do {
nextPtrs[idx] = buckets[bucket];
if(Contains(ref value)) {
// Put back the entry in the free list if someone else added it while trying to add
do {
nextPtrs[idx] = firstFreeTLS[threadIndex * IntsPerCacheLine];
} while(Interlocked.CompareExchange(
ref firstFreeTLS[threadIndex * IntsPerCacheLine], idx + 1,
nextPtrs[idx]) != nextPtrs[idx]);
return false;
}
} while(Interlocked.CompareExchange(ref buckets[bucket], idx + 1, nextPtrs[idx]) != nextPtrs[idx]);
}
return true;
}
public bool Contains<T>(ref T value) where T : struct, IEquatable<T> {
if(allocatedIndexLength <= 0) {
return false;
}
int* buckets = (int*)this.buckets;
int* nextPtrs = (int*)next;
int bucket = value.GetHashCode() & bucketCapacityMask;
int valuesIdx = buckets[bucket] - 1;
while(valuesIdx >= 0 && valuesIdx < valueCapacity) {
if(UnsafeUtility.ReadArrayElement<T>(values, valuesIdx).Equals(value)) {
return true;
}
valuesIdx = nextPtrs[valuesIdx] - 1;
}
return false;
}
public bool TryRemove<T>(T key) where T : struct, IEquatable<T> {
int* buckets = (int*)this.buckets;
int* nextPtrs = (int*)next;
int bucketIdx = key.GetHashCode() & bucketCapacityMask;
int valuesIdx = buckets[bucketIdx] - 1;
int prevValuesIdx = -1;
while(valuesIdx >= 0 && valuesIdx < valueCapacity) {
if(UnsafeUtility.ReadArrayElement<T>(values, valuesIdx).Equals(key)) {
if(prevValuesIdx == -1) {
// Sets head->next to head->next->next(or -1)
buckets[bucketIdx] = nextPtrs[valuesIdx];
}
else {
// Sets prev->next to prev->next(current valuesIdx)->next
nextPtrs[prevValuesIdx] = nextPtrs[valuesIdx];
}
// Mark the index as free
nextPtrs[valuesIdx] = firstFreeTLS[0];
firstFreeTLS[0] = valuesIdx + 1;
return true;
}
prevValuesIdx = valuesIdx;
valuesIdx = nextPtrs[valuesIdx] - 1;
}
return false;
}
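// values, next and buckets share one allocation; each block's offset is rounded up to a cache-line boundary.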
static int CalculateDataSize<T>(
int capacity, int bucketCapacity, out int nextOffset, out int bucketOffset) where T : struct {
nextOffset = (UnsafeUtility.SizeOf<T>() * capacity) + JobsUtility.CacheLineSize - 1;
nextOffset -= nextOffset % JobsUtility.CacheLineSize;
bucketOffset = nextOffset + (sizeof(int) * capacity) + JobsUtility.CacheLineSize - 1;
bucketOffset -= bucketOffset % JobsUtility.CacheLineSize;
return bucketOffset + (sizeof(int) * bucketCapacity);
}
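// Single-threaded allocation: prefer TLS 0's free list or a fresh slot at the tail; once full, migrate a
// free entry from another thread's list, growing the storage only when every list is exhausted.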
int FindFirstFreeIndex<T>(Allocator allocator) where T : struct {
int valuesIdx;
int* nextPtrs = (int*)next;
// Try to find an index in another TLS.
if(allocatedIndexLength >= valueCapacity && firstFreeTLS[0] == 0) {
for(int tls = 1; tls < JobsUtility.MaxJobThreadCount; ++tls) {
int tlsIndex = tls * IntsPerCacheLine;
if(firstFreeTLS[tlsIndex] > 0) {
valuesIdx = firstFreeTLS[tlsIndex] - 1;
firstFreeTLS[tlsIndex] = nextPtrs[valuesIdx];
nextPtrs[valuesIdx] = 0;
firstFreeTLS[0] = valuesIdx + 1;
break;
}
}
// No indexes found.
if(firstFreeTLS[0] == 0) {
GrowHashSet<T>(DoubleCapacity(valueCapacity), allocator);
}
}
if(firstFreeTLS[0] == 0) {
valuesIdx = allocatedIndexLength;
allocatedIndexLength++;
}
else {
valuesIdx = firstFreeTLS[0] - 1;
firstFreeTLS[0] = nextPtrs[valuesIdx];
}
if(!(valuesIdx >= 0 && valuesIdx < valueCapacity)) {
throw new InvalidOperationException(
$"Internal HashSet error, values index: {valuesIdx} not in range of 0 and {valueCapacity}");
}
return valuesIdx;
}
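// Lock-free allocation for concurrent adds: pops this thread's free list, reserves fresh entries in
// batches of 16 from the shared tail, and steals from other threads' lists as a last resort.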
int FindFreeIndexFromTLS(int threadIndex) {
int idx;
int* nextPtrs = (int*)next;
int thisTLSIndex = threadIndex * IntsPerCacheLine;
do {
idx = firstFreeTLS[thisTLSIndex] - 1;
if(idx < 0) {
// Mark this TLS index as locked
Interlocked.Exchange(ref firstFreeTLS[thisTLSIndex], -1);
// Try to allocate more indexes with this TLS
if(allocatedIndexLength < valueCapacity) {
idx = Interlocked.Add(ref allocatedIndexLength, 16) - 16;
if(idx < valueCapacity - 1) {
int count = math.min(16, valueCapacity - idx) - 1;
for(int i = 1; i < count; ++i) {
nextPtrs[idx + i] = (idx + 1) + i + 1;
}
nextPtrs[idx + count] = 0;
nextPtrs[idx] = 0;
Interlocked.Exchange(ref firstFreeTLS[thisTLSIndex], (idx + 1) + 1);
return idx;
}
if(idx == valueCapacity - 1) {
Interlocked.Exchange(ref firstFreeTLS[thisTLSIndex], 0);
return idx;
}
}
Interlocked.Exchange(ref firstFreeTLS[thisTLSIndex], 0);
// Could not find an index, try to steal one from another TLS
for(bool iterateAgain = true; iterateAgain;) {
iterateAgain = false;
for(int i = 1; i < JobsUtility.MaxJobThreadCount; i++) {
int nextTLSIndex = ((threadIndex + i) % JobsUtility.MaxJobThreadCount) * IntsPerCacheLine;
do {
idx = firstFreeTLS[nextTLSIndex] - 1;
} while(idx >= 0 && Interlocked.CompareExchange(
ref firstFreeTLS[nextTLSIndex], nextPtrs[idx], idx + 1) != idx + 1);
// A stored value of -1 means the owning thread has locked its list and may be about to publish a batch.
if(idx == -2) {
iterateAgain = true;
}
else if(idx >= 0) {
nextPtrs[idx] = 0;
return idx;
}
}
}
throw new InvalidOperationException("HashSet has reached capacity, cannot add more.");
}
if(idx >= valueCapacity) {
throw new InvalidOperationException($"nextPtr idx {idx} beyond capacity {valueCapacity}");
}
// Another thread is using this TLS, try again.
} while(Interlocked.CompareExchange(
ref firstFreeTLS[threadIndex * IntsPerCacheLine], nextPtrs[idx], idx + 1) != idx + 1);
nextPtrs[idx] = 0;
return idx;
}
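// Reallocates storage at the new capacity, then rehashes every chain into the new bucket array.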
void GrowHashSet<T>(int newCapacity, Allocator allocator) where T : struct {
int newBucketCapacity = math.ceilpow2(newCapacity * 2);
if(newCapacity == valueCapacity && newBucketCapacity == (bucketCapacityMask + 1)) {
return;
}
if(valueCapacity > newCapacity) {
throw new ArgumentException("Shrinking a hashset is not supported");
}
int nextOffset, bucketOffset;
int totalSize = CalculateDataSize<T>(newCapacity, newBucketCapacity, out nextOffset, out bucketOffset);
byte* newValues = (byte*)UnsafeUtility.Malloc(totalSize, JobsUtility.CacheLineSize, allocator);
byte* newNext = newValues + nextOffset;
byte* newBuckets = newValues + bucketOffset;
UnsafeUtility.MemClear(newNext, sizeof(int) * newCapacity);
UnsafeUtility.MemCpy(newValues, values, UnsafeUtility.SizeOf<T>() * valueCapacity);
UnsafeUtility.MemCpy(newNext, next, sizeof(int) * valueCapacity);
// Re-hash the buckets, first clear the new buckets, then reinsert.
UnsafeUtility.MemClear(newBuckets, sizeof(int) * newBucketCapacity);
int* oldBuckets = (int*)buckets;
int* newNextPtrs = (int*)newNext;
for(int oldBucket = 0; oldBucket <= bucketCapacityMask; ++oldBucket) {
int curValuesIdx = oldBuckets[oldBucket] - 1;
while(curValuesIdx >= 0 && curValuesIdx < valueCapacity) {
var curValue = UnsafeUtility.ReadArrayElement<T>(values, curValuesIdx);
int newBucket = curValue.GetHashCode() & (newBucketCapacity - 1);
oldBuckets[oldBucket] = newNextPtrs[curValuesIdx];
newNextPtrs[curValuesIdx] = ((int*)newBuckets)[newBucket];
((int*)newBuckets)[newBucket] = curValuesIdx + 1;
curValuesIdx = oldBuckets[oldBucket] - 1;
}
}
UnsafeUtility.Free(values, allocator);
if(allocatedIndexLength > valueCapacity) {
allocatedIndexLength = valueCapacity;
}
values = newValues;
next = newNext;
buckets = newBuckets;
valueCapacity = newCapacity;
bucketCapacityMask = newBucketCapacity - 1;
}
}
}
NativeHashSetTests.cs:
using System.Collections.Generic;
using NativeContainers;
using NUnit.Framework;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;
namespace NativeContainerTests {
public class NativeHashSetBasicTests {
const int HashSetInitialCapacity = 4;
NativeHashSet<int> testHashSet;
[OneTimeSetUp]
public void Setup() {
testHashSet = new NativeHashSet<int>(HashSetInitialCapacity, Allocator.Persistent);
}
[OneTimeTearDown]
public void TearDown() {
testHashSet.Dispose();
}
[Test, Order(0)]
public void Capacity_ShouldBeInitialValue() {
Assert.AreEqual(HashSetInitialCapacity, testHashSet.Capacity);
}
[Test, Order(1)]
public void Add_ShouldAdd1() {
testHashSet.TryAdd(1);
Assert.AreEqual(1, testHashSet.Length);
}
[Test, Order(2)]
public void Remove_ShouldRemove1() {
testHashSet.TryRemove(1);
Assert.AreEqual(0, testHashSet.Length);
}
[Test, Order(3)]
public void Add_ShouldReturnTrueOnUnique() {
Assert.IsTrue(testHashSet.TryAdd(1));
}
[Test, Order(4)]
public void Add_ShouldReturnFalseOnDuplicate() {
Assert.IsFalse(testHashSet.TryAdd(1));
}
[Test, Order(5)]
public void Remove_ShouldReturnTrueIfExists() {
Assert.IsTrue(testHashSet.TryRemove(1));
}
[Test, Order(6)]
public void Remove_ShouldReturnFalseIfNotExists() {
Assert.IsFalse(testHashSet.TryRemove(99));
}
[Test, Order(7)]
public void Clear_LengthShouldEqual0() {
testHashSet.TryAdd(1);
Assert.AreEqual(1, testHashSet.Length);
testHashSet.Clear();
Assert.AreEqual(0, testHashSet.Length);
}
[Test, Order(8)]
public void Contains_ShouldNotContainIfNotAdded() {
Assert.IsFalse(testHashSet.Contains(99));
}
}
public class NativeHashSetExtendedRandomTests {
const int RandomNumbersMinLength = 100;
const int RandomNumbersMaxLength = 200;
const int RandomNumberMaxValue = 1000000;
NativeHashSet<int> testHashSet;
NativeList<int> uniqueRandomNumbers;
[OneTimeSetUp]
public void Setup() {
uniqueRandomNumbers = new NativeList<int>(Allocator.Persistent);
var managedSet = new HashSet<int>();
var randomNumbersLength = Random.Range(RandomNumbersMinLength, RandomNumbersMaxLength);
for(int i = 0; i < randomNumbersLength; i++) {
managedSet.Add(Random.Range(0, RandomNumberMaxValue));
}
foreach(var num in managedSet) {
uniqueRandomNumbers.Add(num);
}
}
[OneTimeTearDown]
public void TearDown() {
uniqueRandomNumbers.Dispose();
}
[SetUp]
public void TestSetUp() {
testHashSet = new NativeHashSet<int>(uniqueRandomNumbers.Length, Allocator.Temp);
}
[TearDown]
public void TestTearDown() {
testHashSet.Dispose();
}
[Test]
public void Length_ShouldEqualRandomLength() {
for(int i = 0; i < uniqueRandomNumbers.Length; i++) {
testHashSet.TryAdd(uniqueRandomNumbers[i]);
}
Assert.AreEqual(uniqueRandomNumbers.Length, testHashSet.Length);
}
[Test]
public void Remove_ShouldRemoveRange() {
for(int i = 0; i < uniqueRandomNumbers.Length; i++) {
testHashSet.TryAdd(uniqueRandomNumbers[i]);
Assert.IsTrue(testHashSet.TryRemove(uniqueRandomNumbers[i]));
}
Assert.AreEqual(0, testHashSet.Length);
}
[Test]
public void Contains_ShouldContainRandomNumber() {
var randomNum = uniqueRandomNumbers[Random.Range(0, uniqueRandomNumbers.Length)];
testHashSet.TryAdd(randomNum);
Assert.IsTrue(testHashSet.Contains(randomNum));
}
[Test]
public void Contains_ShouldContainRandomRange() {
for(int i = 0; i < uniqueRandomNumbers.Length; i++) {
testHashSet.TryAdd(uniqueRandomNumbers[i]);
Assert.IsTrue(testHashSet.Contains(uniqueRandomNumbers[i]));
}
}
[Test]
public void GetValueArray_ShouldReturnCorrectRandomLength() {
for(int i = 0; i < uniqueRandomNumbers.Length; i++) {
testHashSet.TryAdd(uniqueRandomNumbers[i]);
}
var values = testHashSet.GetValueArray(Allocator.Temp);
Assert.AreEqual(uniqueRandomNumbers.Length, values.Length);
}
[Test]
public void GetValueArray_ShouldReturnAllValues() {
var managedSet = new HashSet<int>();
for(int i = 0; i < uniqueRandomNumbers.Length; i++) {
managedSet.Add(uniqueRandomNumbers[i]);
testHashSet.TryAdd(uniqueRandomNumbers[i]);
}
var values = testHashSet.GetValueArray(Allocator.Temp);
Assert.AreEqual(managedSet.Count, values.Length);
for(int i = 0; i < values.Length; i++) {
Assert.IsTrue(managedSet.Contains(values[i]));
}
}
}
public class NativeHashSetJobRandomTests {
const int RandomNumbersMinLength = 500;
const int RandomNumbersMaxLength = 2000;
const int RandomNumberMaxValue = 1000000;
NativeHashSet<int> testHashSet;
NativeList<int> uniqueRandomNumbers;
struct AddJob : IJobParallelFor {
public NativeHashSet<int>.Concurrent HashSet;
[ReadOnly] public NativeArray<int> ToAdd;
public void Execute(int index) {
HashSet.TryAdd(ToAdd[index]);
}
}
[OneTimeSetUp]
public void Setup() {
var randomNumbersLength = Random.Range(RandomNumbersMinLength, RandomNumbersMaxLength);
uniqueRandomNumbers = new NativeList<int>(randomNumbersLength, Allocator.Persistent);
var managedSet = new HashSet<int>();
for(int i = 0; i < randomNumbersLength; i++) {
managedSet.Add(Random.Range(0, RandomNumberMaxValue));
}
foreach(var num in managedSet) {
uniqueRandomNumbers.Add(num);
}
}
[OneTimeTearDown]
public void TearDown() {
uniqueRandomNumbers.Dispose();
}
[SetUp]
public void SetUpHashSet() {
if(testHashSet.IsCreated) {
testHashSet.Dispose();
}
testHashSet = new NativeHashSet<int>(uniqueRandomNumbers.Length, Allocator.TempJob);
var addJob = new AddJob {
HashSet = testHashSet.ToConcurrent(),
ToAdd = uniqueRandomNumbers.AsArray()
}.Schedule(uniqueRandomNumbers.Length, 1);
addJob.Complete();
}
[Test]
public void Add_ShouldAddRange() {
Assert.AreEqual(uniqueRandomNumbers.Length, testHashSet.Length);
}
[Test]
public void Contains_ShouldContainRandomRange() {
for(int i = 0; i < uniqueRandomNumbers.Length; i++) {
Assert.IsTrue(testHashSet.Contains(uniqueRandomNumbers[i]));
}
}
[Test]
public void Contains_ShouldContainRandomRangeAfterRemovingSome() {
var numberToRemove = Random.Range(0, uniqueRandomNumbers.Length + 1);
for(int i = 0; i < numberToRemove; i++) {
testHashSet.TryRemove(uniqueRandomNumbers[i]);
}
Assert.AreEqual(uniqueRandomNumbers.Length - numberToRemove, testHashSet.Length);
for(int i = numberToRemove; i < uniqueRandomNumbers.Length; i++) {
Assert.IsTrue(testHashSet.Contains(uniqueRandomNumbers[i]));
}
}
[Test]
public void Clear_ShouldClearAdditions() {
Assert.AreEqual(uniqueRandomNumbers.Length, testHashSet.Length);
testHashSet.Clear();
Assert.AreEqual(0, testHashSet.Length);
}
}
}
As far as I could tell, only Dispose is missing setting buffer to null:
public void Dispose() {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
AtomicSafetyHandle.CheckDeallocateAndThrow(m_Safety);
DisposeSentinel.Dispose(ref m_Safety, ref m_DisposeSentinel);
#endif
NativeHashSetData.DeallocateHashSet(buffer, allocator);
buffer = null;
}
struct Empty {}
NativeHashMap<T, Empty>
I believe this works.
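For what it's worth, an empty struct still occupies one byte in C#, so this should cost about the same as the <T, byte> version above; it mainly states the intent more clearly. A quick sketch, reusing the Empty struct:
var set = new NativeHashMap<int, Empty>(64, Allocator.Temp);
set.TryAdd(42, new Empty()); // the value is pure filler
set.Dispose();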
Fixed for posterity.
Appreciate you sharing your code here
In NativeHashSetData.cs on line 196, "while (values)" gives me Error CS0029: Type byte* can't be implicitly converted to bool.
Should that be "while (values != null)"? I have never written "unsafe" code in C# so I am just guessing.
Edit: That whole TryRemoveThreaded method seems to be faulty; among other things, it does not return a bool as it should.
Did you enable "Allow unsafe code" in your Unity player settings?
Yeah, I did; that was another issue I had, but I resolved it. This is the method in NativeHashSetData which is causing the problems:
public bool TryRemoveThreaded<T>(T value, int threadIndex) where T : IEquatable<T> {
int* buckets = (int*)this.buckets;
int* nextPtrs = (int*)this.next;
int bucketIdx = value.GetHashCode() & bucketCapacityMask;
int valuesIdx = buckets[bucketIdx];
int prevValuesIdx;
do {
valuesIdx = buckets[bucketIdx];
while(values)
} while(valuesIdx != 0)
while(valuesIdx != 0) {
if(valuesIdx < 0) {
valuesIdx = buckets[bucketIdx];
}
}
}
Now the first error is that it declares bool as its return value but never returns one; additionally, values is a byte*, so "while(values)" causes the mentioned error.
TryRemoveThreaded() should not have existed in the code I gave; it was something I was experimenting with and was incomplete in my commits. NativeHashMap (which this is based on) doesn't support a jobified Remove(), but it should technically be possible. Of course, if you remove and add in the same job, you get no guarantees as to what the result is. It sort of passed my tests, but edge cases were buggy and I got bored, so I'll remove that from my post. I don't know how you accessed it though, unless you added the wrapper or called it directly.
It should be noted that Clear() in my version has a 10x performance improvement; depending on what you're doing, that may be an option, and you can re-add what you need:
[See proposed memclear solution] Clear() on large NativeMultiHashMaps is causing performance issues.
I do all my native container clears on IJobs and it can have a big impact depending on what you are doing.
Here are a few generic jobs for that:
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
[BurstCompile]
struct ClearNativeList<T> : IJob where T : struct {
public NativeList<T> Source;
public void Execute() {
Source.Clear();
}
}
[BurstCompile]
struct ClearNativeQueue<T> : IJob where T : struct {
public NativeQueue<T> Source;
public void Execute() {
Source.Clear();
}
}
[BurstCompile]
struct ClearNativeHashMap<T1, T2> : IJob where T1 : struct, IEquatable<T1> where T2 : struct {
public NativeHashMap<T1, T2> Source;
public void Execute() {
Source.Clear();
}
}
[BurstCompile]
struct ClearNativeMultiHashMap<T1, T2> : IJob where T1 : struct, IEquatable<T1> where T2 : struct {
public NativeMultiHashMap<T1, T2> Source;
public void Execute() {
Source.Clear();
}
}
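Usage is then just scheduling one of these against the container, e.g. (a sketch; list and previousHandle are hypothetical):
var clearHandle = new ClearNativeList<int> { Source = list }.Schedule(previousHandle);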
@RecursiveEclipse
There must be a bug somewhere in the constructor.
var set = new NativeHashSet<int4>(1000, Allocator.Persistent);
This makes Unity crash. The problem is int4; a simple int seems to work fine.
I've attached the repro project for 2019.1.3f1.
PS
Line 191 in NativeHashSetData.cs looks suspicious:
bucketOffset = nextOffset + (sizeof(int) * capacity) + JobsUtility.CacheLineSize - 1;
I changed sizeof(int) to UnsafeUtility.SizeOf<T>() and Unity stopped crashing. I'm not sure if it's a correct fix, or if it's the only place.
I'm unsure why that is; UnsafeUtility.SizeOf should just be a call to sizeof, but I still get crashes with your project even after changing everything.
I mean, sizeof(int) is not the same as sizeof(T) when T is not int.
Oh yeah, that'll do it. Bad day I guess; fixed.
EDIT: Maybe a double oopsie; the original code should be correct, or close. I'll have to investigate.
Okay, now fixed for real. It was caused by a bad MemClear length. The original sizeof(int) was correct because next and buckets hold plain int indexes into the values array; only the values block scales with T.
@RecursiveEclipse is there any reason for the Assert.AreEqual in GetValueArray (line 94 in NativeHashSetData.cs)?
Also, would something like this be fine for converting to managed C#?
public List<T> ToList<T>() where T : struct {
List<T> result = new List<T>();
var buckets = (int*)this.buckets;
var nextPtrs = (int*)next;
for(int bucketIndex = 0; bucketIndex <= bucketCapacityMask; ++bucketIndex) {
int valuesIndex = buckets[bucketIndex];
while(valuesIndex > 0) {
result.Add(UnsafeUtility.ReadArrayElement<T>(values, valuesIndex - 1));
valuesIndex = nextPtrs[valuesIndex - 1];
}
}
return result;
}
Most likely because it's there in the NativeHashMap implementation I ported over. I don't see why ToList couldn't be added, though I can't say how it functions in a job environment. Why do you need a managed list?
A managed Dictionary is almost twice as fast as an unmanaged NativeHashMap:
This is mainly because a NativeHashMap can't modify values, so you have to Remove() and TryAdd() for every modification. But in the screenshots you also see that Dictionary.TryGetValue is faster than NativeHashMap.TryGetValue.
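For illustration, the update pattern being compared looks roughly like this (a sketch with hypothetical names, based on the claim above):
// Managed Dictionary: modify the stored value in place.
dict[key] = newValue;
// NativeHashMap, per the above: remove and re-add for every modification.
map.Remove(key);
map.TryAdd(key, newValue);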
Edit: I just realized I could add the NativeArray to the Dictionary.