Large NativeParallelHashMap Slowing Down Performance

Hello!

I am working on a voxel project and am currently tackling pathfinding. I implemented a Dykstra algorithm in three-dimensional space, and it appears to be working alright. The main snag was that when the path was calculated for just one entity, the game would experience a severe frame drop for about half a second.

My next step was to move the pathfinding into a Job so that it runs on a separate thread (I had heard this was a popular approach), and this did not seem too daunting since I already have the block/chunk generation done via threading anyway. Here is my strategy:

  • Initialize a NativeParallelHashMap of a fairly large size.
  • During chunk generation, pass each block type (enum) to the hash map with its world coordinates (Vector3) as the key.
  • Pass this hash map to the pathfinding job for navigation (sketched below).
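
Concretely, the setup looks roughly like this (a trimmed-down sketch, not my exact code; the capacity is just a guess at the total block count, and this BlockType stands in for my real enum):

    using Unity.Collections;
    using UnityEngine;

    public enum BlockType : byte { Air, Dirt, Stone, Sand, Grass }

    public class BlockLibraryHolder : MonoBehaviour {
        public NativeParallelHashMap<Vector3, BlockType> blockLibrary;

        void Awake() {
            // Step 1: allocate the map once, fairly large, outside of any job.
            blockLibrary = new NativeParallelHashMap<Vector3, BlockType>(1000000, Allocator.Persistent);
        }

        void OnDestroy() {
            // Native containers are unmanaged and must be disposed manually.
            if (blockLibrary.IsCreated) blockLibrary.Dispose();
        }
    }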

However, I get stuck on step two, when passing the blocks to the hash map. After the first two chunks, execution slows down exponentially, and the project becomes unplayable in less than three seconds. Are NativeParallelHashMaps not suited to storing large amounts of data?

Code:
Block generation (chunk sizes are currently 16 x 16 x 100)

            for (int xIndex = 0; xIndex < jobChunkWidth; xIndex++) {
                for (int zIndex = 0; zIndex < jobChunkWidth; zIndex++) {
                    int groundHeight = (int)GetHeightForCoord(new Vector2(jobAnchorPoint.x + xIndex, jobAnchorPoint.y + zIndex));
                    for (int yIndex = 0; yIndex < jobChunkHeight; yIndex++) {
                        int blockIndex = ConvertVectorToInt(new Vector3(xIndex, yIndex, zIndex));
                        if (yIndex < groundHeight - jobStoneLine) {
                            jobBlocks[blockIndex] = BlockType.Stone; // Everything below the stone line.
                        } else if (yIndex < groundHeight) {
                            jobBlocks[blockIndex] = BlockType.Dirt;
                        } else if (yIndex == groundHeight && yIndex <= jobWaterLevel) {
                            jobBlocks[blockIndex] = BlockType.Sand;
                        } else if (yIndex == groundHeight && yIndex > jobWaterLevel) {
                            jobBlocks[blockIndex] = BlockType.Grass;
                        } else {
                            jobBlocks[blockIndex] = BlockType.Air;
                        }
                        jobBlockLibrary[ConvertIntToWorldVector(blockIndex)] = jobBlocks[blockIndex]; // Adding the block to the hash map.
                    }
                }
            }

I think iterating over the hash map is not very performant. Think of it like a dictionary: direct key reads/writes are fast, iterating is slow. I use one for a quadrant system with thousands of entities and the performance blows my mind. The only time I start running into performance problems is when I try to iterate over a large portion of the hash map to detect targets in a radius.
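
To illustrate the difference (a rough sketch, not my actual code; the key/value types are arbitrary):

    using Unity.Collections;
    using Unity.Mathematics;

    public static class HashMapAccessExample {
        // Fast: a single hash lookup, O(1) on average.
        public static bool TryGetBlock(NativeParallelHashMap<int3, byte> map, int3 key, out byte block) {
            return map.TryGetValue(key, out block);
        }

        // Slow on large maps: copies out and touches every entry, O(n).
        public static int CountNonAir(NativeParallelHashMap<int3, byte> map) {
            int count = 0;
            NativeArray<byte> values = map.GetValueArray(Allocator.Temp);
            for (int i = 0; i < values.Length; i++) {
                if (values[i] != 0) count++;
            }
            values.Dispose();
            return count;
        }
    }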

I found a few ways to limit the cost. The main one is batching, so that only a limited number of entities perform this heavy operation each frame. In my case I also restrict the total number of quadrants to check.
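
The batching can be as simple as a round-robin queue with a per-frame budget (hypothetical sketch, names made up):

    using System.Collections.Generic;
    using UnityEngine;

    public class TargetScanScheduler : MonoBehaviour {
        const int scansPerFrame = 8;                    // per-frame budget for the heavy operation
        readonly Queue<int> pending = new Queue<int>(); // entity ids waiting for their turn

        public void Register(int entityId) => pending.Enqueue(entityId);

        void Update() {
            int budget = Mathf.Min(scansPerFrame, pending.Count);
            for (int i = 0; i < budget; i++) {
                int entityId = pending.Dequeue();
                RunHeavyScan(entityId);
                pending.Enqueue(entityId); // re-queue so it gets scanned again later
            }
        }

        void RunHeavyScan(int entityId) { /* radius check over nearby quadrants */ }
    }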

Check how many keys you have in your hash map. If the count doesn’t correspond to how many unique blocks you have, it’s possible you’re not setting unique keys correctly. You’re piping things through ConvertVectorToInt, which is odd; perhaps blockIndex isn’t unique for every distinct input Vector3. Also, your input Vector3 doesn’t take into account the block’s offset in the world, so if you’re not adding that context in the other methods, every chunk would end up writing to the same set of keys. Consider instead using int3 with the x/y/z components offset by world position.
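
Something along these lines (sketch only; chunkOrigin is an assumed input in block coordinates):

    using Unity.Mathematics;

    public static class BlockKeys {
        // Make the key world-unique by offsetting the local block index with
        // the chunk's origin in block coordinates (an assumed input).
        public static int3 WorldKey(int3 chunkOrigin, int x, int y, int z) {
            return chunkOrigin + new int3(x, y, z);
        }
    }

    // Usage: blockLibrary.TryAdd(BlockKeys.WorldKey(chunkOrigin, xIndex, yIndex, zIndex), blockType);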

An algorithm’s inherent complexity, the efficiency of its implementation, and its inputs are all potential factors in execution time. Try using deep profiling or custom profiler markers to find what is taking a disproportionate amount of time to execute, as a starting point for optimization.
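
A custom marker might look like this (sketch; the marker name is arbitrary):

    using Unity.Profiling;

    public static class ChunkGenProfiling {
        // Shows up under this name in the Profiler window (CPU Usage module).
        static readonly ProfilerMarker addBlocksMarker = new ProfilerMarker("ChunkGen.AddBlocksToHashMap");

        public static void TimedSection() {
            using (addBlocksMarker.Auto()) {
                // ... the code you suspect, e.g. the hash map writes ...
            }
        }
    }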

Also, just generally, try to use data types and methods from Unity.Mathematics (rather than things like Vector3, as you’re doing here) and compile with Burst if you’re not doing so already.
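
For example, a Burst-compiled job over int3 keys might look like this (sketch only; the field names and the byte value type are placeholders, not your project’s code):

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using Unity.Mathematics;

    [BurstCompile]
    public struct AddBlocksJob : IJob {
        public int3 chunkOrigin;                               // world offset of this chunk, in blocks
        public NativeParallelHashMap<int3, byte> blockLibrary; // byte stands in for a block enum

        public void Execute() {
            for (int x = 0; x < 16; x++) {
                for (int z = 0; z < 16; z++) {
                    for (int y = 0; y < 100; y++) {
                        // TryAdd avoids overwriting if the key already exists.
                        blockLibrary.TryAdd(chunkOrigin + new int3(x, y, z), 0);
                    }
                }
            }
        }
    }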

I assume you mean Dijkstra’s algorithm for pathfinding, not Dykstra’s projection algorithm.

I am not iterating through the hash map. I iterate along the x, y, and z axes and add a new block value to the hash map for each unique spot. Each chunk waits in a calculation queue, and only one job can be started at a time through this queue, so there is no risk of multiple jobs running in the same frame. The pathfinding algorithm is irrelevant at the moment, since I am hitting performance issues at the chunk generation phase just from adding each block to the hash map.
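
The queue is essentially this (simplified; the actual job body is omitted):

    using System.Collections.Generic;
    using Unity.Jobs;
    using UnityEngine;

    public class ChunkJobQueue : MonoBehaviour {
        struct ChunkGenJob : IJob {
            public void Execute() { /* generate blocks, write them to the hash map */ }
        }

        readonly Queue<ChunkGenJob> pending = new Queue<ChunkGenJob>();
        JobHandle runningHandle;
        bool jobRunning;

        void Update() {
            if (jobRunning && runningHandle.IsCompleted) {
                runningHandle.Complete(); // finish the current chunk before starting another
                jobRunning = false;
            }
            if (!jobRunning && pending.Count > 0) {
                runningHandle = pending.Dequeue().Schedule();
                jobRunning = true;
            }
        }
    }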

I have checked the key count several times and it is solid. I have also verified that blockIndex is not duplicating (via hours of poring over chunk data at my most desperate times). The “ConvertIntToWorldVector” method adds the offset you mentioned, so there is no re-write issue there. In an earlier version, I accidentally had it re-writing keys due to this very issue, and it actually performed much better, since a very limited number of blocks were being stored.

I am not familiar with profiling, so I will need to look into that and I will read about Burst, too. Thank you.

(Yes, I am using Dijkstra’s algorithm. Pardon the typo.)
