Iterate through an unknown quantity of neighboring nodes recursively with Job System

I have an algorithm that goes through an existing list of items/nodes on a 2d grid, makes conditional checks, and assigns a calculated value between 0 and 5 to each item that qualifies, and removes the ones that don’t. Each item knows the 4 neighboring items, so it keeps performing these conditional checks until no neighboring nodes qualify.

In pseudo-code it looks something like:

var newNodes = new List<Node>();
var discardedNodes = new List<Node>();

newNodes.AddRange(myNodes);
myNodes.Clear();

while (newNodes.Count > 0)
{
    var node = newNodes[0];

    // && (not ||): only process nodes that are in neither list yet
    if (!myNodes.Contains(node) && !discardedNodes.Contains(node))
    {
        if (NodePassesConditionalChecks(node))
        {
            node.value = calculatedValue;
            myNodes.Add(node);

            foreach (var neighbour in node.neighbours)
            {
                if (!myNodes.Contains(neighbour) && !discardedNodes.Contains(neighbour))
                {
                    newNodes.Add(neighbour);
                }
            }
        }
        else
        {
            discardedNodes.Add(node);
        }
    }

    newNodes.Remove(node);
}

foreach (var node in myNodes)
    node.Method1();

foreach (var node in discardedNodes)
    node.Method2();

The code works (single-threaded), but since the conditional checks are somewhat expensive, the number of operations fairly high, and the whole routine run fairly frequently, it has a noticeable effect on framerate.
So I’d like to use the job system, but I’m not sure how to pull it off, or if it’s even possible.

The two issues I see:

A. Each loop reads/writes to the same lists (myNodes, newNodes & discardedNodes)
B. There is no way to predict how many iterations will be required, so I can’t use IJobParallelFor

The only solutions to the two issues I can think of:

A. Skip the (!myNodes.Contains(node) && !discardedNodes.Contains(node)) check and filter out duplicates on the main thread after the job is complete.

B. Split the loop into multiple steps, to allow for IJobParallelFor, like: 1. Check list in one job and create list of new neighbours 2. (next frame) check neighbours, and for the valid ones, add their neighbours 3. Repeat previous step x amount of frames.

Each solution is very flawed, so I’m looking for a smarter solution. Any ideas?

It’s been a while since I last worked with jobs.

However, you cannot have list-of-lists style objects in jobs. So your nodes, which seem to contain a list of neighbours, won’t be accepted as-is. Instead you will have to work out a different way to find those neighbours inside the job.

You usually split workflows like that into several sequential jobs with dependencies. Given an array of your nodes, you could, for example, first run the function checking the condition for each node. Save each result in another array at the same index, and after this job completes you know the length of the other arrays required, since you can count how many nodes passed and how many didn’t. And so on.
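Sketched in code, that two-pass pattern could look something like this (NodeData, the condition, and the job names are placeholders for your actual types, not a definitive implementation):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Hypothetical blittable node data; stands in for whatever your nodes hold.
public struct NodeData
{
    public float value;
}

// Pass 1: run the conditional check for every node in parallel and record
// the result at the same index.
[BurstCompile]
public struct CheckNodesJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<NodeData> nodes;
    [WriteOnly] public NativeArray<bool> passed;

    public void Execute(int index)
    {
        // Replace with your actual conditional checks.
        passed[index] = nodes[index].value > 0f;
    }
}

// Pass 2: once the results exist, a single-threaded job (or the main thread)
// can count them and split the indices into "accepted" and "discarded".
[BurstCompile]
public struct CollectResultsJob : IJob
{
    [ReadOnly] public NativeArray<bool> passed;
    public NativeList<int> acceptedIndices;
    public NativeList<int> discardedIndices;

    public void Execute()
    {
        for (int i = 0; i < passed.Length; i++)
        {
            if (passed[i]) acceptedIndices.Add(i);
            else discardedIndices.Add(i);
        }
    }
}
```

The second job would be scheduled with the first job’s JobHandle as its dependency, so both can be scheduled up front.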


I intend to use multiple NativeArrays identifiable by index, in place of the List of Lists, but it will require some work on the main thread to initialize the NativeArrays for each iteration (collecting neighbours, etc.). Hopefully it won’t be a big performance hit.
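For reference, one way to flatten the neighbour links into a single index array (a sketch, assuming each node has up to 4 neighbours and a hypothetical indexOf lookup built on the main thread; Node is the existing class):

```csharp
using System.Collections.Generic;
using Unity.Collections;

// Neighbour k of node i is stored at neighbours[i * 4 + k]; -1 means
// "no neighbour on that side". Built once on the main thread, after which
// a job can follow neighbour links purely by index.
NativeArray<int> BuildNeighbourArray(List<Node> nodes, Dictionary<Node, int> indexOf)
{
    var neighbours = new NativeArray<int>(nodes.Count * 4, Allocator.TempJob);
    for (int i = 0; i < nodes.Count; i++)
    {
        for (int k = 0; k < 4; k++)
        {
            Node n = nodes[i].neighbours[k];
            neighbours[i * 4 + k] =
                (n != null && indexOf.TryGetValue(n, out int j)) ? j : -1;
        }
    }
    return neighbours;
}
```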

Yeah, sounds pretty close to my “solution B”, unless I misunderstand. I was hoping there was a “simpler” solution than splitting the jobs up, but I guess that’s the way to go.
As a novice at Job System, it sounds a bit messy, but I’ll figure it out.

That does, however, also mostly solve my issue A! If I’m using multiple sequential jobs, I could add the results from the NativeArrays produced by jobs to my main-thread lists, which then get fed back into the next job. Then checking for “myNodes.Contains(node)” and “discardedNodes.Contains(node)” inside jobs will work, because all the relevant data is already there.

Just to make this clear. In your description of approach B, you seem to assume you have to run these jobs separated by one frame. That is not the case. Simply define the jobs with dependencies A → B → … → N, then launch the jobs and wait for them at the end of your frame. Unity makes sure to run jobs in the proper order and priority as long as dependencies are defined.
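In code, that dependency chain is just a matter of passing the previous JobHandle into each Schedule call (a sketch; jobA, jobB, jobC and nodeCount are placeholders):

```csharp
// Schedule everything up front; Unity runs the jobs in dependency order.
JobHandle handleA = jobA.Schedule(nodeCount, 64); // parallel pass
JobHandle handleB = jobB.Schedule(handleA);       // runs after A
JobHandle handleC = jobC.Schedule(handleB);       // runs after B

// Complete as late as possible, e.g. in LateUpdate.
handleC.Complete();
```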

You could even make your lists on the main thread native as well, with the persistent allocator. That way you could simply replace a list or pass it to a job without having to copy the contents around. Unless there was some constraint in DOTS that I don’t remember which didn’t allow for this. DOTS is amazing, but working with it (at least when I did) was a nightmare. So many constraints, few of them documented; outdated documentation; few working examples floating around… really frustrating at times, to be sure, haha. But it was / is still under heavy development, so that’s to be expected.

I’m sorry, I don’t think I quite follow. Could you please expand on this?
Going A → B → … → N, we don’t know how many iterations B, C, etc. will need until all the previous steps are completed (we don’t know how many valid nodes A will find, so we don’t know how many and which neighbours will be added for B to process), meaning I’d need to wait for them to complete before scheduling the next one, no? Since we’re supposed to “schedule early, complete late”, I figured that should be applied to each individual job.

Pretty sure that’s allowed, I’ve done something similar before (stored vertices of a LOD0 mesh in a Persistent NativeArray, and used jobs to generate various LOD meshes as needed). The drawbacks are that NativeArrays can’t contain managed types, and I believe they are slower than managed arrays if used without Burst compiler. So I’ll still need to convert between Lists of Classes and NativeArrays.
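The round trip itself is cheap as long as the per-node data is a blittable struct; roughly (a sketch, using a single float per node for illustration):

```csharp
// Extract the blittable fields of each node class into a plain array,
// hand that to a NativeArray, and copy the results back after the jobs ran.
var values = new float[myNodes.Count];
for (int i = 0; i < myNodes.Count; i++)
    values[i] = myNodes[i].value;

var native = new NativeArray<float>(values, Allocator.TempJob);
// ... schedule jobs that read/write "native" ...
native.CopyTo(values);
native.Dispose();
```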

Agreed on all counts. I’ve got bits of my application working with the job system, and it’s very satisfying once it’s working, but so frustrating to get there.

First of all, no: you can schedule them at the same time, as long as you find a way to provide all the information required by the jobs. That part can be a bit tricky sometimes.
When I encountered a similar problem, of not knowing the correct size for an array / list until after the first jobs ran, I handled it as follows. Now I honestly don’t even know anymore why this was problematic, since lists should be able to resize… but I’m pretty sure there was a good reason for what I did, so it was likely some random DOTS constraint I worked around.
Anyways, I created a little job which forcefully adjusts the size of an existing list to the correct size, which gets calculated in a previous job. For that you have to pass around an integer value, which in my case needed to be a native container as well, since I wrote to it in a parallel job.

    [BurstCompile]
    private struct AdjustListSizeJob : IJob
    {
        // Inputs:
        [ReadOnly] [DeallocateOnJobCompletion] public NativeIntPtr addedSize;

        // Outputs:
        [WriteOnly] public NativeList<Vector3> emptyList;

        public void Execute()
        {
            // Reserve the capacity computed by the previous job, then pad the
            // list with default-initialized entries so later jobs can write by index.
            emptyList.Capacity = addedSize.Value;
            emptyList.AddRange(new NativeArray<Vector3>(addedSize.Value, Allocator.Temp));
        }
    }

Not very elegant, but it works. Keep in mind there may be a better solution now, or what I did may even only have been necessary for my project. I honestly don’t remember anymore :smile:


Thanks for all the suggestions! I’ll try to work with it!