One of the expensive jobs in my project clears a large NativeMultiHashMap. This map has a capacity of 10,000 (I’m running some stress tests). This takes, on average, 1.19ms. Since later jobs depend on this map, it acts as a sync point, holding up other threads for that time.
I’m looking for advice about how this might be improved. A few ideas, some of which may be dumb:
- Look into clearing the NativeMultiHashMap in parallel somehow.
- Avoid clearing the NativeMultiHashMap entirely: overwrite the previous frame's values in place, tagging each entry with a 'generation' integer. Jobs that read from the map would filter its values and only process the ones with the current generation (a rough sketch of this follows below).
Confidence level in these ideas: 10% 
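To make the second idea more concrete, here is a rough sketch of what I am imagining. The names here (GenerationedValue, CurrentGeneration, the int payload) are placeholders rather than my real types; the point is only that readers skip entries whose generation tag is stale, so the map never needs an explicit Clear.

```csharp
// Hypothetical sketch of the 'generation' idea: values carry the frame
// generation they were written in, and readers ignore stale entries.
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

public struct GenerationedValue
{
    public int Generation;   // frame counter at the time this entry was written
    public int Payload;      // stand-in for the real per-entity data
}

[BurstCompile]
public struct ReadCurrentGenerationJob : IJob
{
    [ReadOnly] public NativeMultiHashMap<Entity, GenerationedValue> Map;
    [ReadOnly] public NativeArray<Entity> Keys;
    public int CurrentGeneration;

    public void Execute()
    {
        for (int i = 0; i < Keys.Length; i++)
        {
            if (Map.TryGetFirstValue(Keys[i], out var value, out var it))
            {
                do
                {
                    // Skip entries left over from earlier frames.
                    if (value.Generation != CurrentGeneration)
                        continue;

                    // ... process value.Payload here ...
                }
                while (Map.TryGetNextValue(out value, ref it));
            }
        }
    }
}
```

The obvious cost is that stale entries keep occupying buckets until their keys are overwritten, so lookups have to walk past dead values.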
If you know any techniques which are useful for this problem, please share them! I would sincerely appreciate the help. 
I solved this by having the later jobs that read from the NativeMultiHashMap also remove keys from it once reading is complete. It worked wonderfully and let me skip the hashmap-clearing step entirely.
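Roughly, the consuming job looks like this (a simplified sketch, not my exact code; the key list is a placeholder and the value type is reduced to an int):

```csharp
// Rough sketch: a single-threaded job that reads the entries for each key
// and then removes that key once it is done with it.
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

[BurstCompile]
public struct ReadAndConsumeJob : IJob
{
    public NativeMultiHashMap<Entity, int> Map;   // int stands in for my CustomStruct
    [ReadOnly] public NativeArray<Entity> Keys;

    public void Execute()
    {
        for (int i = 0; i < Keys.Length; i++)
        {
            Entity key = Keys[i];
            if (Map.TryGetFirstValue(key, out var value, out var it))
            {
                do
                {
                    // ... process value here ...
                }
                while (Map.TryGetNextValue(out value, ref it));

                // Remove all entries for this key now that they are consumed,
                // so a separate Clear pass is no longer needed.
                Map.Remove(key);
            }
        }
    }
}
```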
One question I have about this approach:
Is it safe to call NativeMultiHashMap.Remove() in parallel from multiple threads, with each thread removing a different key? Or does removing any key from the hashmap rearrange its internal bucket layout?
Generally speaking, no operation is thread-safe unless it is part of the ParallelWriter.
About the original issue: calling Clear on a (multi-)hashmap should be relatively cheap. Can you share some more details about this specific case?
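To illustrate the point (a minimal sketch, not code from any particular project): concurrent writes go through the map's ParallelWriter, which only exposes Add; Remove is not part of it, so removals need to happen from a single-threaded job.

```csharp
// Only the ParallelWriter's Add is safe to call from multiple worker threads
// at once. Remove is not on ParallelWriter, so removals must come from a
// non-parallel job scheduled separately.
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

[BurstCompile]
public struct ParallelAddJob : IJobParallelFor
{
    public NativeMultiHashMap<Entity, int>.ParallelWriter MapWriter;
    [ReadOnly] public NativeArray<Entity> Keys;

    public void Execute(int index)
    {
        // Safe from many threads concurrently.
        MapWriter.Add(Keys[index], index);
    }
}
```

Scheduling-wise, map.AsParallelWriter() is what gets passed into the parallel job, and any removal pass would be a plain IJob scheduled to run after it.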
Thank you for the reply. Some further details:
- This is a NativeMultiHashMap<Entity, CustomStruct>, with a capacity of 200,000 (apologies, I miscalculated in my original post).
- CustomStruct is 16 bytes (2 ints and an Entity), so the values alone come to 3,200,000 bytes.
This hashmap clears in 1.19 ms in a Bursted job (which does nothing but clear the hashmap). Maybe that is actually relatively fast, considering the numbers?
How long would you expect that Clear operation to take? Many thanks, again.
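For reference, the job being timed is essentially just this (a sketch; CustomStruct here is a stand-in matching the 16-byte layout described above):

```csharp
// Sketch of the clear-only job being measured (roughly what takes 1.19 ms
// at 200,000 capacity).
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

public struct CustomStruct
{
    public int A;
    public int B;
    public Entity Target;
}

[BurstCompile]
public struct ClearMapJob : IJob
{
    public NativeMultiHashMap<Entity, CustomStruct> Map;

    public void Execute()
    {
        // The entire job body: clear the map so it can be refilled next frame.
        Map.Clear();
    }
}
```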