Being a huge fan of fighting games, I’m really looking forward to playing the upcoming Absolver. Meanwhile, I’m working on an AI for NPCs in this kind of game. Instead of writing a very long list of “if-then” rules, and having already tried fuzzy logic, I chose neural networks as the way to go. I also wanted to avoid creating a huge training set, so I thought about genetic algorithms.
It would have been foolish to start on the real project right away, so for now I’m just trying to have characters walk through a small labyrinth without touching the walls, using a neural network that is subject to mutation.
I’m having some problems: though they sometimes improve and their fitness goes up, they always end up failing, and the whole process suffers for it.
Would anyone care to have a look? That would be fantastic!
Everything is in the package. I’d gladly answer any questions!
Be generous with your names if you are handing out scripts to other people. Character counts in variable names don’t impact memory any more. Plus the build you uploaded throws null reference errors every time you run it. You have commented out the lines where you assign Creation.NPI and Creation.PH.
Anyway, on to the algorithm. Genetic algorithms require a large number of individuals in each generation to get anywhere, plus a large number of generations. I upped the number of individuals to 200. This immediately led to the problem of individuals walking in a circle on the spot, so I added a timer to kill off the whole generation after a while.
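Stripped back to a sketch, the timer is nothing fancy; something like this, where GenerationTimer and NextGeneration are placeholders rather than names from the project:

```csharp
using UnityEngine;

// Sketch of a per-generation time limit so critters spinning on the spot
// can't stall the run forever. All names here are placeholders.
public class GenerationTimer : MonoBehaviour
{
    public float generationLength = 15f;   // seconds before the cull
    private float elapsed;

    void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed >= generationLength)
        {
            elapsed = 0f;
            // Kill whoever is still wandering and breed the next generation.
            SendMessage("NextGeneration");
        }
    }
}
```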
The next thing I looked at was mutation chance. There is something messed up here. On a value of 1 I get no variation between runs. On a value of 2 I get no memory between runs. I haven’t looked into why this is happening.
Anyway, I’ll have another play later. But those are my initial thoughts.
Okay, I just dove into the code some more; there are a couple of other problems that will be preventing it from converging.
Each generation you are only grabbing a single individual. You should be randomly picking from the top quartile or so. This prevents you from getting stuck in a local minimum.
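Something along these lines, with made-up names rather than your actual classes:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Stand-in critter with only the field this sketch needs.
public class Critter
{
    public float Fitness;
}

public static class QuartileSelection
{
    // Pick a random parent from the top 25% instead of always taking the best.
    public static Critter PickParent(List<Critter> population)
    {
        var ranked = population.OrderByDescending(c => c.Fitness).ToList();
        int quartileSize = Mathf.Max(1, ranked.Count / 4);
        return ranked[Random.Range(0, quartileSize)];   // int overload: max is exclusive
    }
}
```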
You will also want to implement sex, i.e. crossover between two parents. While it’s not strictly needed for a genetic algorithm, it does dramatically improve the time to converge. There is a reason biological entities are so obsessed with sex. Combining sex and neural networks is challenging, but a quick Google search should find a few techniques for it.
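To give you a flavour, one of the simpler techniques is a uniform crossover over flattened weight arrays. The flat-array layout is an assumption for the sketch, not how your network actually stores its weights:

```csharp
using UnityEngine;

public static class Crossover
{
    // Uniform crossover of two same-topology networks whose weights have been
    // flattened into equal-length arrays.
    public static float[] Breed(float[] mum, float[] dad)
    {
        var child = new float[mum.Length];
        for (int i = 0; i < child.Length; i++)
        {
            // Each weight comes from one parent or the other with equal chance.
            child[i] = Random.value < 0.5f ? mum[i] : dad[i];
        }
        return child;
    }
}
```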
And because I’m rebelling against doing anything useful today, here is a completely working version. The critters can reach the end of the path in about six or seven generations. There were a bunch of little things I changed, but the overall structure is still intact. Have a look through and feel free to ask any questions.
In one of those ‘this is why I love neural networks’ moments, one set of critters actually learned to turn around once they reached the end of the path, and followed the path all the way back to the origin.
Woooow! Awesome, man! You know, I had almost lost faith. I’m going to check this out, but in any case, this is definitely something that will prove useful! Thank you very much, dear Mormon! How come you know so much about genetics?
When something catches my interest I tend to get carried away…
My degree is in biochemical engineering. So for a while there regular biological genetics was my bread and butter. Evolution through natural selection is what genetic algorithms are based on. I also did a bit of experimental work in some of my early games with neural networks.
Sure
The changes were primarily made to the Creation class.
I changed a few random ints to floats to give me finer control
I separated the range values for generation and mutation (i.e. I replaced limRange with two variables).
I created a new class called NetValues that tokenised the important characteristics of the neural net. I made a few minor refactoring changes to BrainMachine to make this work
I added in a timer to kill a generation after a set time had passed. The timer increases with each successive generation. This prevents critters from turning on the spot and not ending the simulation. I did some minor refactoring to BrainMachine to make this work
I completely rewrote Selection(). It now grabs the tokenised neural nets of all critters that have more than 90% of the fitness of the best critter. This isn’t the best selection criterion ever, but it’s better than just grabbing the top one.
I rewrote Generate(). It now creates one critter that is a direct clone of one of those selected above, then fills up the rest of the generation with mutations of random critters from that pool. I’m still using your mutation function unmodified. There’s a rough sketch of the Selection()/Generate() flow just after this list.
I rewrote the fitness function (in BrainMachine). It now considers the distance from the origin, as well as the time survived.
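Heavily simplified, and with stand-in names rather than the real classes (NetValues here is reduced to a flat weight array plus the fitness it scored, and mutate stands in for your existing mutation function), the flow looks roughly like this:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Stripped-down stand-in for the tokenised net.
public class NetValues
{
    public float[] Weights;
    public float Fitness;
}

public static class Breeding
{
    // Keep every net that scored at least 90% of the best fitness.
    public static List<NetValues> Selection(List<NetValues> lastGeneration)
    {
        float best = lastGeneration.Max(n => n.Fitness);
        return lastGeneration.Where(n => n.Fitness >= 0.9f * best).ToList();
    }

    // One exact clone of a survivor, then mutated copies of randomly chosen
    // survivors until the generation is full.
    public static List<float[]> Generate(List<NetValues> selected, int populationSize,
                                         System.Func<float[], float[]> mutate)
    {
        var next = new List<float[]> { (float[])selected[0].Weights.Clone() };
        while (next.Count < populationSize)
        {
            var parent = selected[Random.Range(0, selected.Count)];
            next.Add(mutate((float[])parent.Weights.Clone()));
        }
        return next;
    }
}
```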
There were a couple of bugs I solved late in the piece that may or may not have been in your original code. These may have been more critical than the rest of the fixes.
Arrays were being passed to the new critters via reference, meaning multiple critters got the same network (see the sketch just after this list)
Fitness wasn’t resetting each run
Critters were being given different initial starting positions
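The array one is worth spelling out, since it’s an easy trap in C#. A sketch, with placeholder names:

```csharp
public static class NetworkCopy
{
    // Arrays are reference types, so handing the same array to every critter
    // means they all share one network and overwrite each other's weights.
    public static float[] CopyWeights(float[] parentWeights)
    {
        // return parentWeights;                // bug: every caller shares this array
        return (float[])parentWeights.Clone();  // fix: each critter gets its own copy
    }
}
```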
The other big changes were in the parameters on the Darwin GameObject
The number of Units was increased from 3 to 200. It’s important to get lots of individuals in each generation
The mutate chance was dropped from 5 to 1. Values from 0.5 to 2 seemed to work well
The mutate range (previously lim range) was dropped from 100 to 20
And then a few incidental things I changed, for no real reason.
Recap is now sent data for every generation, instead of only generations that improve. You generally want this anyway, so you can see when things plateau out.