I found a very curious thing. I accidentally made the numbers I use as keys in a NativeHashMap smaller, and suddenly my code ran waaaay faster. The speedup came from a function that does a lot of hash lookups. So now I’ve rewritten some code to scale my values up and down, which feels odd…
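In case it helps to see the shape of it, here's a minimal sketch of that scale-the-keys-then-dedup pass, assuming Unity.Collections' NativeHashMap with float3 keys. The class, the function name, and the 0.01 scale factor are hypothetical; only the "scale values before using them as keys" idea comes from the thread.

```csharp
using Unity.Collections;
using Unity.Mathematics;

public static class VertexDedup
{
    // Hypothetical sketch of the workaround: scale each vertex into a
    // smaller range before using it as a hash map key. Scaling by a
    // constant is deterministic, so two bitwise-identical vertices
    // still produce the exact same float3 key after scaling.
    public static int[] BuildIndexBuffer(NativeArray<float3> verts, float scale)
    {
        var firstIndex = new NativeHashMap<float3, int>(verts.Length, Allocator.Temp);
        var indices = new int[verts.Length];
        int unique = 0;
        for (int i = 0; i < verts.Length; i++)
        {
            float3 key = verts[i] * scale; // e.g. scale = 0.01f maps 0..100 into 0..1
            if (!firstIndex.TryGetValue(key, out int idx))
            {
                idx = unique;
                unique++;
                firstIndex.Add(key, idx);
            }
            indices[i] = idx;
        }
        firstIndex.Dispose();
        return indices;
    }
}
```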
If you have a lot of hash collisions, the collection gets a lot slower. Scaling the values must be reducing those collisions; it's not about the actual size of the numbers.
Ok! I’m using the hash map to check for duplicate vertices. So that would mean I’m getting hash collisions for different float3s in the 0–100 range, and fewer collisions in the 0–1 range. Weird.
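One way to probe this is to count how often hashed keys land in the same bucket for the two ranges. The sketch below is a rough approximation only: masking the hash down to a power-of-two bucket index is an assumption about how a hash map buckets its keys, not verified NativeHashMap internals, and uniformly random float3s won't reproduce the structured coordinates of real mesh vertices.

```csharp
using System.Collections.Generic;
using Unity.Mathematics;

public static class HashCollisionProbe
{
    // Counts how many of `count` random float3 keys in [0, range)
    // land in a bucket that an earlier key already occupies, assuming
    // a power-of-two bucket count and bucket = hash & (bucketCount - 1).
    public static int CountBucketCollisions(float range, int count, int bucketCount, uint seed)
    {
        var rng = new Unity.Mathematics.Random(seed);
        var seen = new HashSet<uint>();
        uint mask = (uint)(bucketCount - 1); // bucketCount must be a power of two
        int collisions = 0;
        for (int i = 0; i < count; i++)
        {
            float3 v = rng.NextFloat3(0f, range);
            if (!seen.Add(math.hash(v) & mask))
                collisions++;
        }
        return collisions;
    }
}

// Usage, comparing the two magnitude ranges from the thread:
//   HashCollisionProbe.CountBucketCollisions(1f,   10000, 1 << 16, 1234); // keys in 0..1
//   HashCollisionProbe.CountBucketCollisions(100f, 10000, 1 << 16, 1234); // keys in 0..100
```

If the two counts differ a lot, that would support the collision explanation; if they're similar, the effect is probably specific to how the real vertex data interacts with the hash.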