I have a NativeParallelMultiHashMap I’m adding to in a job. It starts with a capacity of 10,000 but can grow quite a bit larger than that. Resizing works fine until it reaches 51,200,000 elements, after which it tries to allocate more memory than currently exists on the planet. I’ve never run into anything like this, so I’m hoping someone who has can help me figure out why it’s happening.
This is the full error:
If an element is a single byte, that amounts to about 51 MB (ignoring any hashmap overhead). If the element is a struct totalling 100 bytes, you’d be allocating roughly 5 GB. The question then is: what type is “size”? If it’s a uint, it can address just under 4 GiB. If it’s a ulong, the size itself won’t be an issue and installed memory is the only limit.
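The arithmetic is easy to sanity-check with a quick sketch in plain C# (element sizes are the hypothetical ones from above, not the actual entry layout):

```csharp
using System;

// 51,200,000 elements at various element sizes (ignoring hashmap overhead).
const long count = 51_200_000;

Console.WriteLine(count * 1);   // 51200000 bytes, about 51 MB
Console.WriteLine(count * 100); // 5120000000 bytes, about 5 GB

// A uint caps out just under 4 GiB, so the 100-byte case already exceeds it:
Console.WriteLine(uint.MaxValue);               // 4294967295
Console.WriteLine(count * 100 > uint.MaxValue); // True
```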
Uhm … line 4756?
I think you need to get your script sizes in check. I would never have a script run this long, especially not if it contains complex logic. Mental overload.
You’re seeing a case where the allocation needed to match the new capacity overflows the maximum value of a signed 32-bit integer (2 GB minus 1 byte). The result is negative, and once it’s sign-extended to 64 bits and those bits are interpreted as unsigned, it looks enormous. In reality, the intended allocation is around 2.9 GB.

The root cause is that UnsafeParallelHashMapData.CalculateDataSize works only with int offsets and returns an int allocation size. Changing it to use long for the offsets and the total allocation, and updating the calling code to use longs as well, should avoid the overflow and fix the allocation error. Note that this can have side effects on collections that genuinely exceed 2 GB, since the rest of the code may not have been written to address >= 2 GB of allocated memory.

The way I see it, you have a choice between reworking your data model to be more compact, or patching the container to support larger allocations and then fixing whatever bugs that uncovers. You may want to file a bug report as well.
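To make the wrap-around concrete, here’s a minimal sketch. The individual section sizes are made up; only the sum near 2.9 GB mirrors your situation:

```csharp
using System;

// Hypothetical section sizes that sum to 2,900,000,000 bytes (~2.9 GB).
int valuesSize = 900_000_000;
int keysSize   = 900_000_000;
int nextSize   = 600_000_000;
int bucketSize = 500_000_000;

// int addition wraps silently past int.MaxValue (2,147,483,647):
int totalSize = valuesSize + keysSize + nextSize + bucketSize;
Console.WriteLine(totalSize); // -1394967296

// Sign-extend to 64 bits and reinterpret as unsigned: an absurdly large size.
Console.WriteLine((ulong)(long)totalSize); // 18446744072314584320

// Doing the math in long avoids the wrap entirely:
long fixedTotal = (long)valuesSize + keysSize + nextSize + bucketSize;
Console.WriteLine(fixedTotal); // 2900000000
```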
Just FYI: newer Collections releases (2.x) reworded the error message to state that the allocated size is negative, rather than the vague “nonsensical” wording you’re seeing here in 1.4.0.
[BurstCompatible(GenericTypeArguments = new [] { typeof(int), typeof(int) })]
internal static void AllocateHashMap<TKey, TValue>(int length, int bucketLength, AllocatorManager.AllocatorHandle label,
out UnsafeParallelHashMapData* outBuf)
where TKey : struct
where TValue : struct
{
CollectionHelper.CheckIsUnmanaged<TKey>();
CollectionHelper.CheckIsUnmanaged<TValue>();
UnsafeParallelHashMapData* data = (UnsafeParallelHashMapData*)Memory.Unmanaged.Allocate(sizeof(UnsafeParallelHashMapData), UnsafeUtility.AlignOf<UnsafeParallelHashMapData>(), label);
bucketLength = math.ceilpow2(bucketLength);
data->keyCapacity = length;
data->bucketCapacityMask = bucketLength - 1;
long keyOffset, nextOffset, bucketOffset;
long totalSize = CalculateDataSize<TKey, TValue>(length, bucketLength, out keyOffset, out nextOffset, out bucketOffset);
data->values = (byte*)Memory.Unmanaged.Allocate(totalSize, JobsUtility.CacheLineSize, label);
data->keys = data->values + keyOffset;
data->next = data->values + nextOffset;
data->buckets = data->values + bucketOffset;
outBuf = data;
}
[BurstCompatible(GenericTypeArguments = new [] { typeof(int), typeof(int) })]
internal static void ReallocateHashMap<TKey, TValue>(UnsafeParallelHashMapData* data, int newCapacity, int newBucketCapacity, AllocatorManager.AllocatorHandle label)
where TKey : struct
where TValue : struct
{
newBucketCapacity = math.ceilpow2(newBucketCapacity);
if (data->keyCapacity == newCapacity && (data->bucketCapacityMask + 1) == newBucketCapacity)
{
return;
}
CheckHashMapReallocateDoesNotShrink(data, newCapacity);
long keyOffset, nextOffset, bucketOffset;
long totalSize = CalculateDataSize<TKey, TValue>(newCapacity, newBucketCapacity, out keyOffset, out nextOffset, out bucketOffset);
byte* newData = (byte*)Memory.Unmanaged.Allocate(totalSize, JobsUtility.CacheLineSize, label);
byte* newKeys = newData + keyOffset;
byte* newNext = newData + nextOffset;
byte* newBuckets = newData + bucketOffset;
// The items are taken from a free-list and might not be tightly packed; copy all of the old capacity
UnsafeUtility.MemCpy(newData, data->values, (long)data->keyCapacity * UnsafeUtility.SizeOf<TValue>());
UnsafeUtility.MemCpy(newKeys, data->keys, (long)data->keyCapacity * UnsafeUtility.SizeOf<TKey>());
UnsafeUtility.MemCpy(newNext, data->next, (long)data->keyCapacity * UnsafeUtility.SizeOf<int>());
for (int emptyNext = data->keyCapacity; emptyNext < newCapacity; ++emptyNext)
{
((int*)newNext)[emptyNext] = -1;
}
// re-hash the buckets, first clear the new bucket list, then insert all values from the old list
for (int bucket = 0; bucket < newBucketCapacity; ++bucket)
{
((int*)newBuckets)[bucket] = -1;
}
for (int bucket = 0; bucket <= data->bucketCapacityMask; ++bucket)
{
int* buckets = (int*)data->buckets;
int* nextPtrs = (int*)newNext;
while (buckets[bucket] >= 0)
{
int curEntry = buckets[bucket];
buckets[bucket] = nextPtrs[curEntry];
int newBucket = UnsafeUtility.ReadArrayElement<TKey>(data->keys, curEntry).GetHashCode() & (newBucketCapacity - 1);
nextPtrs[curEntry] = ((int*)newBuckets)[newBucket];
((int*)newBuckets)[newBucket] = curEntry;
}
}
Memory.Unmanaged.Free(data->values, label);
if (data->allocatedIndexLength > data->keyCapacity)
{
data->allocatedIndexLength = data->keyCapacity;
}
data->values = newData;
data->keys = newKeys;
data->next = newNext;
data->buckets = newBuckets;
data->keyCapacity = newCapacity;
data->bucketCapacityMask = newBucketCapacity - 1;
}
[BurstCompatible(GenericTypeArguments = new [] { typeof(int), typeof(int) })]
internal static long CalculateDataSize<TKey, TValue>(int length, int bucketLength, out long keyOffset, out long nextOffset, out long bucketOffset)
where TKey : struct
where TValue : struct
{
var sizeOfTValue = UnsafeUtility.SizeOf<TValue>();
var sizeOfTKey = UnsafeUtility.SizeOf<TKey>();
var sizeOfInt = UnsafeUtility.SizeOf<int>();
var valuesSize = CollectionHelper.Align(sizeOfTValue * length, JobsUtility.CacheLineSize);
var keysSize = CollectionHelper.Align(sizeOfTKey * length, JobsUtility.CacheLineSize);
var nextSize = CollectionHelper.Align(sizeOfInt * length, JobsUtility.CacheLineSize);
var bucketSize = CollectionHelper.Align(sizeOfInt * bucketLength, JobsUtility.CacheLineSize);
long totalSize = (long)valuesSize + keysSize + nextSize + bucketSize;
keyOffset = 0 + valuesSize;
nextOffset = keyOffset + keysSize;
bucketOffset = nextOffset + nextSize;
return totalSize;
}
Keys are Vector3Ints and values are a struct containing three ushorts and a half. As for the script length: it’s a terrain generation tool, so almost none of the logic can run independently. Not cleanly, anyway. I have all the code organized and commented well, so it’s relatively easy to find what I’m looking for.
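For reference, a layout like that (hypothetical names below; assuming Unity.Mathematics’ half) works out to 8 bytes per value, on top of the 12-byte Vector3Int key and the hashmap’s internal 4-byte next index:

```csharp
using Unity.Mathematics; // half
using UnityEngine;       // Vector3Int

// Hypothetical value struct matching the description: three ushorts and a half.
struct TerrainCell
{
    public ushort A;
    public ushort B;
    public ushort C;
    public half D;   // 2 bytes
}

// Per entry: 12-byte Vector3Int key + 8-byte TerrainCell value + 4-byte next
// index = 24 bytes, before the bucket array and cache-line alignment padding.
```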
Thank you for explaining. I couldn’t find anything with just the “nonsensical size” error message. I had already planned on reworking the data structure, so I’ll probably make that a priority now. Do you think that patching the collection like you suggested would work well as a stopgap, or would it be better to just rework my code ASAP?
I’m generally in favor of adapting to known limitations over trying to change the underlying behavior. If you hit a point where you’re forced to try and modify the package, it would be prudent to make some test cases with the Test Framework (uses NUnit) to make sure data stays usable for very large collections.
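If you do go down the patching road, a regression test along these lines (hypothetical names; assumes the Unity Test Framework and enough RAM for a multi-GB allocation) would catch data loss on large resizes:

```csharp
using NUnit.Framework;
using Unity.Collections;

public class LargeHashMapTests
{
    [Test]
    public void ResizePastIntMaxTotalSize_KeepsEntriesReadable()
    {
        // Enough int->long entries to push the total allocation well past 2 GB.
        var map = new NativeParallelMultiHashMap<int, long>(10_000, Allocator.Persistent);
        try
        {
            for (int i = 0; i < 150_000_000; i++)
            {
                map.Add(i, (long)i * 3);
            }
            // Entries written before the resizes must still be retrievable.
            Assert.IsTrue(map.TryGetFirstValue(149_999_999, out long value, out _));
            Assert.AreEqual(149_999_999L * 3, value);
        }
        finally
        {
            map.Dispose();
        }
    }
}
```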
I realized the modification I suggested doesn’t handle individual section sizes (keys, values, etc.) >= 2 GB, so those could still be a problem. Additional changes like the ones below could help.
[BurstCompatible(GenericTypeArguments = new [] { typeof(int), typeof(int) })]
internal static long CalculateDataSize<TKey, TValue>(int length, int bucketLength, out long keyOffset, out long nextOffset, out long bucketOffset)
where TKey : struct
where TValue : struct
{
var sizeOfTValue = UnsafeUtility.SizeOf<TValue>();
var sizeOfTKey = UnsafeUtility.SizeOf<TKey>();
var sizeOfInt = UnsafeUtility.SizeOf<int>();
ulong valuesSize = CollectionHelper.Align((ulong)sizeOfTValue * (ulong)length, (ulong)JobsUtility.CacheLineSize);
ulong keysSize = CollectionHelper.Align((ulong)sizeOfTKey * (ulong)length, (ulong)JobsUtility.CacheLineSize);
ulong nextSize = CollectionHelper.Align((ulong)sizeOfInt * (ulong)length, (ulong)JobsUtility.CacheLineSize);
ulong bucketSize = CollectionHelper.Align((ulong)sizeOfInt * (ulong)bucketLength, (ulong)JobsUtility.CacheLineSize);
long totalSize = (long)(valuesSize + keysSize + nextSize + bucketSize);
keyOffset = (long)valuesSize;
nextOffset = keyOffset + (long)keysSize;
bucketOffset = nextOffset + (long)nextSize;
return totalSize;
}