I’m currently jobifying a custom tool that splits geometry of 300K+ elements into manageable chunks.
However, as soon as I start the process, my RAM usage quickly blows up beyond my system’s capacity (16 GB). I have a NativeMultiHashMap<int, int> and a NativeArray, both with a capacity of ~300,000 elements. The HashMap is filled with data by an IJobParallelFor.
One guess: your code has a bug where it’s creating far more than 300K entries in the NativeMultiHashMap. These hash maps have a poorly documented behavior where they auto-expand when needed, doubling the current capacity every time it is exceeded.
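One way to check this is to compare capacity against the actual entry count after the job runs. A rough sketch, assuming names like `vertexCount` and `map` stand in for whatever your tool actually uses:

```csharp
// Hypothetical sketch -- sizes and names are illustrative, not from the original post.
// Pre-size the map for the worst case so it never has to expand inside the job.
var map = new NativeMultiHashMap<int, int>(vertexCount, Allocator.TempJob);

// ... schedule and complete the fill job here ...

// If Count() comes back much larger than the capacity you expected,
// the job is emitting more entries than planned and the map kept doubling.
Debug.Log($"Capacity: {map.Capacity}, Entries: {map.Count()}");
map.Dispose();
```

If the job can add more than one entry per input element, the initial capacity needs to cover that worst case, e.g. `vertexCount * maxEntriesPerElement`.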
I’ve switched to a Persistent allocation now, to at least rule out the first possibility. I also modified my job to always emit the same hash → still memory issues.
Since merely creating the job already causes these issues, I now suspect that NativeHashMap<>.Concurrent produces some kind of “garbage” that is never cleaned up. Since this struct doesn’t implement anything like .Dispose(), I’m at a dead end.
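For what it’s worth, the Concurrent struct is only a write-view into the parent container’s memory; it owns no allocation of its own, which is why it has no Dispose(). Disposing the parent map frees everything. A minimal sketch (job and field names are placeholders, and the conversion call may differ between Collections versions):

```csharp
// Hedged sketch: `FillJob` and `writer` are illustrative names.
var map = new NativeMultiHashMap<int, int>(300000, Allocator.Persistent);

var job = new FillJob { writer = map.ToConcurrent() }; // the Concurrent view borrows map's memory
job.Schedule(300000, 64).Complete();

map.Dispose(); // frees the storage the Concurrent view wrote into
```

So a leak here would have to come from somewhere other than an undisposable Concurrent struct.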
Does this only happen in the editor, or in a standalone build too? If the build works fine, it might be the native collection safety checks; they perform some allocations of their own.