Reducing time-complexity of object pooling algorithm with nested loops

So I watched the Brackeys object pooling video, and I’ve been messing around with the idea. In his example, he uses a for loop inside a foreach, with the foreach looping through a list of class instances (aptly named pools), each instance representing a number of prefabs to instantiate. Each entry gets added to a dictionary as a Queue. The inner for loop does the actual instantiation of each individual game object within the current Queue.

I remember seeing a video on reducing time-complexity in algorithms that are O(n^2) because of nested looping, but I only vaguely remember the implementation, and even then the example was in JavaScript, so I’d have a hard time wrapping my head around it in the context of Unity and C# gamedev. But I specifically remember the guy used some number of counter variables to separate the loops and iterate them sequentially rather than nested… or something like that lol

Anyways, I was wondering if anyone here had any ideas. Here’s my current code:

using System.Collections.Generic;
using UnityEngine;

namespace Game
{
    public class Pooler : MonoBehaviour
    {
        [SerializeField] List<Pool> pools;
        Dictionary<EntityType, Queue<GameObject>> Registry;

        void Awake()
        {
            Registry = new Dictionary<EntityType, Queue<GameObject>>();
            Generate();
        }

        void Generate()
        {
            foreach (Pool pool in pools)
            {
                Queue<GameObject> currentQueue = new Queue<GameObject>();
                for (int i = 0; i < pool.size; i++)
                {
                    GameObject currentObject = Instantiate(pool.prefab);
                    currentQueue.Enqueue(currentObject);
                }
                Registry.Add(pool.entityType, currentQueue);
            }
            print("POOLER: pools generated");
        }
    }
}

EDIT: EntityType is just an enum I created for a more verbose drop-down menu in the inspector. In brackeys’ example, he just uses a string and manually types each entry into the serialized field.
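In case it helps anyone reading, Pool is just a small serializable data class holding the fields the loop reads (entityType, prefab, size); something along these lines, with placeholder enum values (yours would obviously differ):

using System;
using UnityEngine;

namespace Game
{
    [Serializable]
    public class Pool
    {
        public EntityType entityType;   // key used in the Registry dictionary
        public GameObject prefab;       // what gets instantiated
        public int size;                // how many copies to pre-instantiate
    }

    // Placeholder entries; the real enum lists whatever entities the game has.
    public enum EntityType
    {
        Enemy,
        Projectile,
        Pickup
    }
}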

The actual optimization depends on the exact algorithm.
There is no general time-complexity shortcut for every algorithm that has nested loops.
In this case I see no way to somehow teleport past the data; the data is there to be seen/read.
Thus you need to go through all the checkpoints; no corner-cutting will let you cross the finish line any faster without getting disqualified, if you get what I mean.

If you meant that sequential execution is somehow better than nested, it’s really not. There is no difference.

Try it visually: if you have, for example, three lists A, B, and C, each having 3, 4, and 5 elements respectively, it doesn’t matter how exactly you visit them, as long as you visit all the elements:

i) A:0, A:1, A:2, B:0, B:1, B:2, B:3, C:0, C:1, C:2, C:3, C:4 (nested order)
ii) A:0, B:0, C:0, A:1, B:1, C:1, A:2, B:2, C:2, B:3, C:3, C:4 (sequential order)

as you can see, the number of elements is the same and the total iterations are unchanged.
what changes, however, is the relative distribution, i.e. the order in which elements get visited. I don’t know if that’s something you care about.
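in C# terms (a throwaway sketch, the names mean nothing), both orders touch exactly 3 + 4 + 5 = 12 elements:

using System;
using System.Collections.Generic;

static class TraversalDemo
{
    static void Main()
    {
        var lists = new List<int[]> { new int[3], new int[4], new int[5] };

        // nested order: finish each list before moving to the next
        int nested = 0;
        foreach (var list in lists)
            for (int i = 0; i < list.Length; i++)
                nested++;

        // "sequential" order: round-robin across the lists by index
        int interleaved = 0;
        for (int i = 0; i < 5; i++)            // 5 = length of the longest list
            foreach (var list in lists)
                if (i < list.Length)
                    interleaved++;

        Console.WriteLine($"{nested} vs {interleaved}"); // 12 vs 12
    }
}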


@orionsyndrome yeah, I finally found it; it was actually a Udemy vid about counting the frequency of n input numbers by storing key/value pairs of each number’s count in an object… only applicable when doing counting or comparison operations. Shame.

So it likely does some sort of approximation or extrapolation to fill in the gaps, right?

This problem is very similar to that old story about Carl F. Gauss, who was apparently a brilliant mathematician even at a very young age. His rural elementary school teacher was a bit like a stereotypical old-timer, y’know, beating pupils with a stick and reading newspapers while the children did chores, like doing arbitrary calculations so that he could have some peace of mind. One day he set them out to add up all the numbers from 1 to 100, hoping it would take them a while, and obviously a 9-year-old Gauss raised his hand after completing the task in only ten seconds. Pissed, but confident that the kid was either lying or got it wrong, the teacher brought him in front of the class to show this fantastic result to everyone, and well, to properly humiliate him, like any decent teacher should do.

Gauss noticed a peculiar pattern that emerged when he wrote down only a short sequence of the numbers.

1 2 3 4 … 97 98 99 100

He ordered the first and the last numbers in pairs, and figured out that they always lead to the same result
1+100 = 101
2+99 = 101
3+98 = 101
4+97 = 101
and so on

he correctly extrapolated from this that the last pair would have to be 50+51 = 101
and thus made an algorithmic observation leading him to a simple formula: 50*101
ending up with a result of 5050 in mere seconds

Legend says the teacher was so fascinated with the boy’s mind that from that point on he personally invested his time, reputation, and even money to help him enter prestigious universities and pursue higher education.

Point being that you need to have some sort of arithmetic, logical, or other mathematical predictor of the data or its logistics in order to shorten up data querying or skip it altogether. In a normally stochastic sample this is usually almost entirely impossible, or the problem lands in NP-hard territory (no polynomial-time algorithm is known, so in practice it can only be approximated to a degree). And at the far end there are problems that are outright undecidable, like the halting problem, which is used as an introductory problem to computability theory and which Turing proved unsolvable.

Therefore if you have finite data but no observable pattern, chances are the shortest path is simply traversing the data points one by one. Whether you do it in a nested manner doesn’t really matter: if you have 120 data points, you definitely need 120 data queries to be able to do something with them.

That said, O(n^2) sounds awfully connected to having nested loops and scary performance-wise, but it’s in fact about something else, not about nested loops in general. O(n^2) is particularly bad when you have n original elements but the algorithm has to evaluate every element against every other element for some reason (maybe you’re cross-comparing them all, looking for matching pairs), the implementation of which typically ends up being nested, and thus O(n^2). Depending on the actual problem, there are algorithms that can reduce this upper-bound time complexity by doing something rather clever, in various steps, with some selective buffers to skip computation and well-placed stable predictors, etc., ending up with O(n*log(n)), for example, which still isn’t quite as good as O(n) but is much better than O(n^2).
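a toy illustration of that kind of reduction (my own example, nothing to do with your pooler): checking an array for a duplicate by cross-comparing every pair is O(n^2), while sorting first and only comparing neighbours gets you to O(n log n):

using System;

static class DuplicateCheck
{
    // cross-compare every pair: O(n^2)
    static bool HasDuplicateNaive(int[] values)
    {
        for (int i = 0; i < values.Length; i++)
            for (int j = i + 1; j < values.Length; j++)
                if (values[i] == values[j])
                    return true;
        return false;
    }

    // sort first (O(n log n)); any duplicate must then be adjacent, so one O(n) scan suffices
    static bool HasDuplicateSorted(int[] values)
    {
        var copy = (int[])values.Clone();
        Array.Sort(copy);
        for (int i = 1; i < copy.Length; i++)
            if (copy[i] == copy[i - 1])
                return true;
        return false;
    }

    static void Main()
    {
        int[] data = { 9, 1, 4, 4 };
        Console.WriteLine(HasDuplicateNaive(data));  // True
        Console.WriteLine(HasDuplicateSorted(data)); // True
    }
}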

Depending on the problem, however, this can be as simple as Gauss’s 1 to N summing algorithm, which is arithmetically reduced to being just O(1), or as hard as proving Fermat’s last theorem.
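and Gauss’s trick in code form, just to close the loop (my sketch): the straightforward loop is O(n), the closed form n*(n+1)/2 is O(1), and both give 5050 for n = 100:

using System;

static class GaussSum
{
    static void Main()
    {
        const int n = 100;

        // O(n): visit every number
        int loopSum = 0;
        for (int i = 1; i <= n; i++)
            loopSum += i;

        // O(1): Gauss's pairing argument, n/2 pairs each summing to n + 1
        int formulaSum = n * (n + 1) / 2;

        Console.WriteLine($"{loopSum} == {formulaSum}"); // 5050 == 5050
    }
}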


Here is the example. This is a naive O(n^2) solution for comparing two integer arrays to see if every integer in the first array is the square root of a value in the second. For it to return true, each value must have a corresponding squared value occurring the same number of times… i.e., if you input [1,2,3], it will be true if the second array is [9, 1, 4]. However, if you input [1,2,1] for the first array and your second array is [4,4,1], it will be false, because while the second array has all the squared values of the first, they do not occur at the same frequency, i.e. you have two 4s, so you’d need two 2s and one 1, rather than one 2 and two 1s.

If you are not familiar with JavaScript you may think this is looping only once; however, indexOf() (and splice() as well) performs a linear scan under the hood, bringing you into quadratic time-complexity:

function same(arr1, arr2){
    if(arr1.length !== arr2.length){
        return false;
    }
    for(let i = 0; i < arr1.length; i++){
        let correctIndex = arr2.indexOf(arr1[i] ** 2)
        if(correctIndex === -1) {
            return false;
        }
        console.log(arr2);
        arr2.splice(correctIndex,1)
    }
    return true;
}

same([1,2,3,2], [9,1,4,4])

Now, the O(n) solution which uses key/value pairs (in the form of an object) to separate the loops:

function same(arr1, arr2){
    if(arr1.length !== arr2.length){
        return false;
    }
    let frequencyCounter1 = {}
    let frequencyCounter2 = {}
    for(let val of arr1){
        frequencyCounter1[val] = (frequencyCounter1[val] || 0) + 1
    }
    for(let val of arr2){
        frequencyCounter2[val] = (frequencyCounter2[val] || 0) + 1       
    }
    console.log(frequencyCounter1);
    console.log(frequencyCounter2);
    for(let key in frequencyCounter1){
        if(!(key ** 2 in frequencyCounter2)){
            return false
        }
        if(frequencyCounter2[key ** 2] !== frequencyCounter1[key]){
            return false
        }
    }
    return true
}

same([1,2,3,2,5], [9,1,4,4,11])
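And since this is a Unity forum, here’s my rough C# take on the same O(n) idea, with Dictionary<int, int> standing in for the javascript objects (untested beyond the two cases below):

using System.Collections.Generic;

static class FrequencyMatch
{
    // true if arr2 holds exactly the squares of arr1, with matching frequencies
    public static bool Same(int[] arr1, int[] arr2)
    {
        if (arr1.Length != arr2.Length)
            return false;

        var counter1 = new Dictionary<int, int>();
        var counter2 = new Dictionary<int, int>();

        foreach (int val in arr1)
        {
            counter1.TryGetValue(val, out int count);   // count defaults to 0 if missing
            counter1[val] = count + 1;
        }
        foreach (int val in arr2)
        {
            counter2.TryGetValue(val, out int count);
            counter2[val] = count + 1;
        }

        foreach (var pair in counter1)
        {
            int squared = pair.Key * pair.Key;
            if (!counter2.TryGetValue(squared, out int occurrences) || occurrences != pair.Value)
                return false;
        }
        return true;
    }
}

// Same(new[] { 1, 2, 3, 2 },    new[] { 9, 1, 4, 4 })     -> true
// Same(new[] { 1, 2, 3, 2, 5 }, new[] { 9, 1, 4, 4, 11 }) -> false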

I used to encounter a lot of this stuff months ago doing leetcode, but I’m afraid it’s one of those use-it-or-lose-it type things (for me anyways). Interestingly, a few days ago I was lurking around stackoverflow for a solution to a similar problem, where I was trying to get rid of a switch inside of a foreach, when I found this post where a user describes a solution that uses actions: https://stackoverflow.com/questions/15139434/switch-inside-loops-impacting-performance
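If I read that answer right (I’m paraphrasing the idea rather than quoting it), you build the behaviour lookup once as a dictionary of delegates, and the loop body becomes a single lookup instead of re-evaluating a switch every iteration; roughly like this, reusing the placeholder EntityType values from above:

using System;
using System.Collections.Generic;
using UnityEngine;

namespace Game
{
    static class SwitchlessDispatch
    {
        // built once up front, instead of a switch evaluated inside the loop
        static readonly Dictionary<EntityType, Action<GameObject>> handlers =
            new Dictionary<EntityType, Action<GameObject>>
            {
                { EntityType.Enemy,      obj => { /* enemy-specific setup */ } },
                { EntityType.Projectile, obj => { /* projectile-specific setup */ } },
            };

        public static void Configure(List<(EntityType type, GameObject obj)> spawned)
        {
            foreach (var (type, obj) in spawned)
                handlers[type](obj);   // one dictionary lookup replaces the switch
        }
    }
}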


yeah, well, these aren’t exactly proper Algorithms, in terms of being scientific pinnacles of computer science. they are, however, algorithms that tend to be employed or reused by any decent programmer on the fly, and you’ll get there eventually with practice. it’s what this line of work is all about, because anyone can make a program that does this and that, but not every program is as responsive or as extensible or as maintainable as any other, and this tends to precipitate into the overall impression of it and the overall user experience.

yes, of course you want to monitor time-complexity in your routines, but I wouldn’t recommend remembering every single one of these tips and their concrete implementations, please don’t bother with it too much, because you’ll likely see thousands of these by the time you finish making anything. in my line of duty I’ve seen hundreds of true gems worth preserving, enough to fill up several books. but I had to stick to what I actually needed at the time, embiggening my general utility classes only when I’d discover or develop something on my own that has a general purpose value and is reusable enough, and implementing major tight and very specific algorithms only for particular things.

as you said “use-it-or-lose-it” that’s pretty much the way it works, you’re not alone.
(and btw forgive me if I assume too much about your actual knowledge, this is a tendency one is likely to have after some time spent on this forum; there are plenty of beginners and little to no actual programmers)

to be honest, there is just one generally useful piece of coding advice I can give you: optimize your code only when it truly fails at being performant. with time, you’ll train your internal assessment of how performant the code is without having to venture into danger zones, and you’ll be able to nail the design before you introduce any sluggishness that badly needs fixing or total redesign. but still, you never ever try to optimize too much, because it will backfire in ways that are hard to describe, most of which have to do with actually understanding code that still hasn’t cooled down enough to be called final. thus over-optimization leads to losing enthusiasm, and this kills off productivity, and then this spirals downward until you never complete anything and never really learn anything from it. in a professional environment this is the quickest route to stress that is hard to handle.

on a different end of the optimization spectrum, though, I am still embarrassed by one of my first forays into commercial software, where I had a simple oversight that rendered my program practically useless when it was fed real-world data. it happened to me twice, once because I didn’t know what to expect (and didn’t explore it properly), and the second time I was ignorant even though I had a feeling it was a wrong design choice. that was a long, long time ago (almost 30 years ago), and the best takeaway was to learn from it and get better at understanding the general principles behind it, not to memorize what the exact solution was. it was turbo pascal in one instance and quick basic pro in another ffs, I can barely remember the syntax nowadays, let alone the actual commands used. but I do remember that in the worse of the two programs I made a file be read sequentially instead of randomly accessed in some crucial places that happened to be part of my hot path (another mistake, and then these two things compounded).

this major slowdown then exploded with large files, which I hadn’t actually tested; my third and probably biggest mistake.

O(n^2)

our brains tend to underestimate how exponents work, and this is why this is important. when you have 100 elements everything is a breeze (10K iterations), but good luck with having 10K of them (100M iterations).

I can read javascript because I was in actionscript for 10+ years.
you definitely can’t apply anything like that to the list/queue nested pool shown in your first post.
boy I wish you could, maybe with quantum computing, who knows, I’m told these things explore parallel realities in real time :)


@orionsyndrome You are correct in assuming I’m new to programming, though I had limited experience with C/C++ back in high school. Starting in late Feb. I took an unfortunate covid layoff and turned it into a foray back into coding, learning JS, Node, Express, RESTful APIs, etc., as well as diving headfirst into C# with Unity. I did a short course with some x64 assembly just to remind myself what data really is, and did a few things with Python, namely a very basic blockchain, as my original goal was to jump into the DeFi hype once I got a portfolio going.

I catch myself making the mistake you mentioned all the time and you are right… it is a major slowdown for me. I obsess over doing things “the right way” so much sometimes that I can’t get over my analysis paralysis long enough to produce lol. A lot of it comes from the fact that the grandiosity of what I often want to accomplish doesn’t line up with the grandiosity (or lack thereof) of my knowledge and experience. The first real program I ever coded (in this decade) was a theme editor for VSCode in Electron/JS/Node, and boy was I in over my head big time on that one. I ended up hacking together something that looked nice and worked (in a very basic sense). The mistake I made on that one was overengineering it from the get-go. I kept worrying about scalability and modularity so much that it ended up having the opposite effect… the more “modular” I made it, the more difficult it was to interpret and maintain. Now it’s just a bowl of spaghetti and meatballs. But! I did learn a ton from that experience, namely asynchronous code, and it caused me to pick up Head First Design Patterns, which I think has helped me a lot.

Thanks for sharing your story. Makes me feel good to know that failure happens sometimes even in the professional realm. And yeah, I kinda let the whole leetcode thing fall to the wayside. As much as I loved learning about wacky, weird, and effective ways to traverse trees and manipulate data structures, I just wasn’t really retaining any of that in the form of practical knowledge. Might help with a job interview, but so far I’ve used very little if anything from that realm in actual architecture. You can tell I’m chomping at the bit to use some of it though, which is why I made this thread. Lol I’m like “YES. 'member that loop splitty thing I saw on udemy!? maybe I can use that thing here!!”

Hello all. In the OP it looks like the code is for the initialisation phase of the pool, i.e. a single operation carried out once during the setup of your game (or level loading, etc.). The whole point of the object pool is to carry out this expensive operation before the time-critical phase of actually running the game. The Instantiate calls for the game objects are likely to be far more expensive than the surrounding loop code.
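To make that concrete: once Awake has done the expensive work, the runtime side is just dequeue/enqueue, so nothing is instantiated in the hot path. A rough sketch of the methods you might add to the Pooler (Spawn/Despawn are my names, not from the OP’s code):

// hypothetical additions to the Pooler class above; names are illustrative
public GameObject Spawn(EntityType type, Vector3 position, Quaternion rotation)
{
    // reuse an object that was instantiated during Awake; no Instantiate call here
    GameObject obj = Registry[type].Dequeue();
    obj.transform.SetPositionAndRotation(position, rotation);
    obj.SetActive(true);

    // cycle it straight back so the pool never runs dry (one common variant)
    Registry[type].Enqueue(obj);
    return obj;
}

public void Despawn(GameObject obj)
{
    // just deactivate; the object stays queued for reuse
    obj.SetActive(false);
}

You’d probably also want to SetActive(false) on each object right after Instantiate in Generate, so the pre-made objects don’t all sit active in the scene.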


Yeah well, I was a noob and it was a hardcore noob mistake. I’m not afraid to admit it.

I can still do it :slight_smile: The world of programming got very complicated in the meantime, so it got easier to be a noob again.
But still, once you adopt these basic principles you can at least identify the potential issues and write a short TODO for a future self who doesn’t mind digging for an algorithmic gem, doing benchmark tests, and truly concentrating on the issue at hand as if it were a project in itself.

Learn how to soar on the same level of abstraction for as long as you can muster, and get things done first, is all I’m saying. And I’m probably the last person who should say this to anyone, so I’m probably talking to myself first and foremost.

Just ignore the micro implementation most of the time, focus on the broad design and pull out any mechanical threads that can be called features when you’re done (even if they’re buggy). Flesh out the complete structure of your app/game as quickly as possible.

From my experience, there is a very fruitful threshold that I call the mid-time in dev, and you want to hit it asap. We are all usually fixated on the early conception phase, and even though the blank files are thrilling, there is a lot of hit and miss in the beginning, things aren’t as clear, and there is a gap between this early time and mid-time where enthusiasm normally crashes, because you have to play dirty to bridge the two. Just do it. This is where a professional environment sometimes helps (contrary to what it normally does to any enthusiasm), because it gives you a strong push forward that many programmers cannot easily muster otherwise. That gap is what produces vaporware (most of the time anyway).

There is no game dev in the world who didn’t have this gap of enthusiasm, so don’t fret, you’re not alone. But try to escape the vicious cycle early on.
