I’m 99% sure it uses some sort of hashing. Each item has an ID, each slot has an ID; you make sure the hashing space is large enough and you use a well-distributed hash function to minimize collisions. The only other wrinkles are the symmetrical recipes and the “loose” recipes, for which I’m sure they just preload them with mirror flags, or simply discard the slot IDs.
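Something like this, roughly (a minimal sketch with names I made up, not anything Mojang actually shipped): the slot ID is just the grid index, the item ID fills that slot, the whole arrangement becomes one hashable key, and you pre-register the mirrored variant so symmetric recipes hit the same entry.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the slot ID is the array index (3x3 grid -> 9 slots),
// the item ID is the value, and the whole arrangement becomes one hashable key.
final class GridKey {
    private final int[] itemIdBySlot; // 0 = empty slot

    GridKey(int[] itemIdBySlot) { this.itemIdBySlot = itemIdBySlot.clone(); }

    @Override public int hashCode() { return Arrays.hashCode(itemIdBySlot); } // well-distributed enough for small arrays
    @Override public boolean equals(Object o) {
        return o instanceof GridKey && Arrays.equals(itemIdBySlot, ((GridKey) o).itemIdBySlot);
    }
}

final class RecipeTable {
    private final Map<GridKey, Integer> outputByGrid = new HashMap<>();

    // Register the recipe and its horizontally mirrored variant, so symmetric shapes resolve the same.
    void register(int[] grid, int outputItemId) {
        outputByGrid.put(new GridKey(grid), outputItemId);
        outputByGrid.put(new GridKey(mirror(grid)), outputItemId);
    }

    Integer lookup(int[] grid) {
        return outputByGrid.get(new GridKey(grid)); // expected O(1), no scan over the whole recipe list
    }

    private static int[] mirror(int[] grid) {
        int[] m = new int[9];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                m[row * 3 + (2 - col)] = grid[row * 3 + col];
        return m;
    }
}
```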
Of course, of course. Maybe Mojang has complicated all of this in the meantime, and who knows what mods do on top of that, but I strongly believe the original recipe solver by Notch and Jens was as simple as possible. Maybe even something as primitive as serializing recipes into flat strings and building string maps (it was Java, after all). It’s all still driven by hashes, but you stop caring about the actual hashes; you kind of delegate that to the language.
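A hedged sketch of that flat-string idea (again my own toy example, not the actual code): serialize the grid into one string key and let Java’s HashMap and String.hashCode do the hashing for you.

```java
import java.util.HashMap;
import java.util.Map;

// Toy version of the "flat string" approach: the grid becomes one canonical string key.
public class StringRecipeBook {
    private final Map<String, String> recipes = new HashMap<>();

    // e.g. {"_","plank","_", "_","plank","_", "_","_","_"} -> "_,plank,_,_,plank,_,_,_,_"
    static String key(String[] grid) { return String.join(",", grid); }

    void add(String[] grid, String output) { recipes.put(key(grid), output); }

    String craft(String[] grid) {
        return recipes.get(key(grid)); // you never touch a hash yourself; String + HashMap handle it
    }

    public static void main(String[] args) {
        StringRecipeBook book = new StringRecipeBook();
        book.add(new String[]{"_","plank","_", "_","plank","_", "_","_","_"}, "stick");
        System.out.println(book.craft(new String[]{"_","plank","_", "_","plank","_", "_","_","_"})); // stick
    }
}
```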
Wait what was the question again?
I was like you at some point; I made horrible stuff when I was young. It’s OK, it’s a learning process.
But don’t be naive and approach your software from the standpoint of a brute-force goblin. Sure, computers are fast, but you can’t always be blunt about it. Make use of memory, make use of clever solutions. Think ahead. You want your recipes to resolve blazing fast. You don’t want to scan an ever-growing database of recipes every time the player tinkers with the crafting grid.
If hashing sounds too horrible to you, try thinking in terms of simple connectivity graphs. As bunny said, you can design your system around culling candidate recipes every time an item is inserted, and that should indeed narrow down the search space significantly (see the sketch below). Or you can attempt it with the strings I mentioned above; that’s actually a decent solution for a beginner.
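Rough illustration of the culling idea (my own assumptions about names and structure, nothing canonical): index recipes by the items they use, so the moment something lands on the grid you only ever look at the recipes that could possibly still match.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Index recipes by ingredient so each placed item immediately culls the candidate set.
class RecipeIndex {
    static final class Recipe {
        final Set<Integer> ingredients;
        final int output;
        Recipe(Set<Integer> ingredients, int output) { this.ingredients = ingredients; this.output = output; }
    }

    private final Map<Integer, List<Recipe>> byIngredient = new HashMap<>();

    void add(Recipe r) {
        for (int item : r.ingredients)
            byIngredient.computeIfAbsent(item, k -> new ArrayList<>()).add(r);
    }

    // Candidates: recipes that use every item currently on the grid; only these need a full shape check.
    List<Recipe> candidates(Set<Integer> itemsOnGrid) {
        List<Recipe> result = new ArrayList<>();
        if (itemsOnGrid.isEmpty()) return result;
        int anyItem = itemsOnGrid.iterator().next();
        for (Recipe r : byIngredient.getOrDefault(anyItem, Collections.emptyList()))
            if (r.ingredients.containsAll(itemsOnGrid))
                result.add(r);
        return result;
    }
}
```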
I made a text viewer for DOS once (back in the 90s). It was pretty and brilliant and colorful, and it even had an animated ASCII chrome… Really neat, and it worked really well. Well, it actually didn’t work well past a certain file size (which I hadn’t even considered testing properly), because I was constantly reading the file sequentially (due to my lack of experience, at the time, with all the various ways you could access a file on disk). Everything was fine below a certain size NOT because the language was great at optimizing my crappy code, but because the OS itself had a read cache in memory. It was called SmartDrive or something like that; you had to turn it on, and it made a significant difference to the drive’s speed and longevity. I had completely forgotten about it (it loaded with the system through the so-called autoexec, good old times); I was too busy thinking how awesome my reader was, and neglected its key functionality.
The fix was relatively simple, but I was somewhat embarrassed regardless. These twists in how we think about and approach our solutions make a world of difference in the perceived quality of the final product. If only I had explored my options a bit further, tested ludicrous scenarios and file sizes, and understood what was actually going on and how much random access would have helped with reading exactly what was necessary, instead of wasting time on the invisible stuff, re-reading potentially huge text files that were supposed to end up in someone else’s hands. The way it scaled was like a Porsche doing 0-60 mph into a brick wall, so I’d compare that design approach to a “blind Porsche driver”.
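For the record, and obviously not the original DOS code, here’s the gist in modern Java terms: seek straight to the window the user is actually looking at, instead of chewing through the file from the top every time.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

// Read only the bytes for the visible page, starting at an arbitrary offset.
public class PageReader {
    public static byte[] readWindow(String path, long offset, int length) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            byte[] window = new byte[length];
            file.seek(offset);            // jump straight to the part we need
            int read = file.read(window); // may be shorter near the end of the file
            return read <= 0 ? new byte[0] : Arrays.copyOf(window, read);
        }
    }
}
```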
Computers are not magic. They actually shift stuff around, and particularly bad instruction combinations will make them choke quite often, especially today with super tight hardware that is ever so consolidated around very specific “industry-standard” paths of optimal performance. People are sometimes obsessed with shaving every little microsecond off their logic, math and updates, yet the real issues lie in extremely bad overall design choices for their business core. You can’t micromanage that away later, because it’s all over the place.