Extreme RAM usage with NativeMultiHashMap and NativeArray

I’m currently jobifying a custom tool that splits 300K+ geometry elements into manageable chunks.
However, as soon as I start the process, my RAM usage quickly blows up beyond my system’s capacity (16 GB). I have a NativeMultiHashMap<int, int> and a NativeArray, both with a capacity of ~300,000 elements. The hash map is filled with data by an IJobParallelFor.
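
For reference, a minimal sketch of the setup described, with illustrative names (this is an assumption about the original code, not a copy of it; the exact way of obtaining the concurrent writer varies between Unity.Collections versions):

```csharp
using Unity.Collections;
using Unity.Jobs;

// Hypothetical reproduction of the setup above; type and field names
// are illustrative.
struct FillChunkMapJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<int> chunkIds;              // ~300,000 elements
    public NativeMultiHashMap<int, int>.Concurrent chunkMap;  // concurrent writer

    public void Execute(int index)
    {
        // Group element indices by their chunk id.
        chunkMap.Add(chunkIds[index], index);
    }
}

// Scheduling, roughly (older packages allow the implicit conversion
// from the map to its .Concurrent writer):
//   var map = new NativeMultiHashMap<int, int>(300_000, Allocator.TempJob);
//   new FillChunkMapJob { chunkIds = ids, chunkMap = map }
//       .Schedule(ids.Length, 64).Complete();
//   ...
//   map.Dispose();
```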

The plain C# version never had any issues like this.

Any Ideas?

One guess: your code has a bug where it’s creating far more than 300K entries in the NativeMultiHashMap. Hash maps have an undocumented feature where they auto-expand as needed, doubling the current capacity every time it is exceeded.
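
A hedged sketch of that growth behavior (not verified against every Unity.Collections version — note that in many versions only the main-thread Add can grow the map, while the .Concurrent writer used inside jobs throws instead):

```csharp
using Unity.Collections;

// Sketch of the auto-expansion: Add past the initial capacity
// reallocates rather than throwing.
var map = new NativeMultiHashMap<int, int>(4, Allocator.Temp);
for (int i = 0; i < 100; i++)
    map.Add(i, i);                   // main-thread Add grows the map as needed

UnityEngine.Debug.Log(map.Capacity); // expected to be well above the initial 4

map.Dispose();
```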

I tried reducing the initial capacity, but I’m getting System.Exceptions saying the hash map is not large enough. :frowning:

Other guesses I can’t check on right now:

  • Dispose() is somehow not executed immediately

  • Each new key causes the NativeMultiHashMap to double its size

I’ve switched to a Persistent allocation now, to at least rule out the first possibility. I also modified my job to always emit the same key → still memory issues.
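
For clarity, the two mitigations described might look roughly like this (illustrative names; an assumption, not the original code):

```csharp
using Unity.Collections;
using Unity.Jobs;

// 1) Persistent allocation, so deferred Dispose() timing can be ruled out:
// var map = new NativeMultiHashMap<int, int>(300_000, Allocator.Persistent);

// 2) A job that always emits the same key, to rule out growth driven by
//    the number of distinct keys.
struct ConstantKeyJob : IJobParallelFor
{
    public NativeMultiHashMap<int, int>.Concurrent map;

    public void Execute(int index)
    {
        map.Add(0, index); // same key for every element
    }
}
```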

Since the mere creation of the job causes these issues, I’m now sure that NativeHashMap<>.Concurrent produces some kind of “garbage” that is never cleaned up. Since this struct doesn’t implement anything like .Dispose(), I’m at a dead end now. :frowning:

Does this only happen in the editor, or in a standalone build too? If the build works OK, it might be the native collection safety checks; they do some allocations of their own.

Disabled the safety checks, still no luck. It’s an editor-only tool, btw.

Any ideas? Like I said: the culprit is NativeHashMap<>.Concurrent causing undisposable garbage.

I have worked extensively with NativeMultiHashMap (NMHM) in previous ECS versions without allocation issues (though I moved to dynamic buffers for speed reasons in my case).

Would you be able to show a code example that replicates the issue?