Hey!
I am training agents using neat-python to be used in Unity with ML-Agents. The issue is that I have not found a way to convert the feedforward network (let alone the recurrent network) from NEAT into something Unity can run.
I have tried extensive googling, checking GitHub and interrogating ChatGPT without luck.
Does anyone have any experience in performing such a conversion?
Do you mean, you’re not using mlagents to train the agent, but only to run inference?
In that case, you might want to use pure Barracuda, Introduction to Barracuda | Barracuda | 1.0.4 (unity3d.com)
Yeah, I have already trained the agent. I have a trained neat.FeedForwardNetwork, which does not seem directly compatible with TensorFlow networks. I only want to run inference with this network.
From what I just read about Barracuda, it seems like the documentation covers the usual neural net libraries like PyTorch, Keras and TF.
Does Barracuda have support for proprietary network types like NEAT's?
Yup. Because NEAT is not a type of network: it’s a type of trainer.
Other types of trainer include PPO, which mlagents can use, or simply using REINFORCE.
The output of a trainer is a trained neural network that you can plug in for inference.
Note that mlagents doesn’t support inference on models not trained with mlagents. I doubt that’s an insurmountable obstacle, especially given that mlagents is 100% open source, but I reckon, if it was me, I’d just go with Barracuda.
Barracuda doesn’t cover PyTorch, Keras or TF. It has no awareness of any of these. Barracuda covers: ONNX. Or, a subset of ONNX. Here are the operators it covers: Supported ONNX operators | Barracuda | 1.0.4 (unity3d.com)
Here are some network architectures that work with Barracuda:
Supported neural architectures and models | Barracuda | 1.0.4 (unity3d.com)
Note that you might need to map your NEAT network into the equivalent dense network.
For example, imagine you have a very simple NEAT network like:
input I:
  a
  b
  c
output O:
  d = a * 0.5 + b * 2
  e = c
So, this maps to the following matrix multiplication:
O = WI
where:
I = (a)
    (b)
    (c)
O = (d)
    (e)
W = (0.5  2  0)
    ( 0   0  1)
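To sanity-check the mapping above, here is a minimal sketch in plain Python (no libraries): the dense layer is just a matrix–vector product, with one row of W per output and one column per input.

```python
# W has one row per output node and one column per input node.
W = [[0.5, 2.0, 0.0],   # d = 0.5*a + 2*b + 0*c
     [0.0, 0.0, 1.0]]   # e = 0*a + 0*b + 1*c

def dense(W, I):
    """Compute O = W @ I as a plain matrix-vector product."""
    return [sum(w * x for w, x in zip(row, I)) for row in W]

a, b, c = 1.0, 3.0, 7.0
print(dense(W, [a, b, c]))  # [6.5, 7.0] -> d = 0.5*1 + 2*3, e = c
```

This is exactly the computation a single ONNX Gemm/MatMul node would perform, which is why a flat NEAT genome with no hidden nodes maps to one dense layer.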
Ah I see, I was hoping there existed some library or method that took care of the dirty work of translating the network structure in the FeedForwardNetwork from NEAT into ONNX.
I suppose going through the network iteratively and building the ONNX network from the corresponding operators is my only choice. As the FeedForwardNetwork in NEAT keeps track of inputs/outputs per node rather than having a layer-specific structure, it may be a bit tricky, but it should be possible.
Oh, right, good point. Hmmm. Yeah, you might need to create one dense layer per NEAT node… Each node will have an output size of one. You can concatenate the outputs of various nodes, using Concat or similar, perhaps, and then feed them to the next dense layer.
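The one-dense-layer-per-node idea can be emulated in plain Python to see the shape of the resulting graph: each NEAT node becomes a dense op with output size one over the concatenation of everything computed so far (inputs first, then earlier nodes). The names and weight layout here are illustrative, not a real exporter.

```python
def run_layered(input_vec, node_weight_rows):
    """Each row in node_weight_rows is one NEAT node, expressed as a
    1-output dense op over the running Concat of all prior outputs --
    the same structure a Gemm-per-node + Concat graph would have in ONNX."""
    concat = list(input_vec)          # running Concat of all values so far
    for row in node_weight_rows:      # one dense "layer" per NEAT node
        out = sum(w * x for w, x in zip(row, concat))
        concat.append(out)            # Concat the new node output
    return concat

# inputs a, b, c; node d = 0.5*a + 2*b; node e reads c (and could read d)
final = run_layered([1.0, 3.0, 7.0], [
    [0.5, 2.0, 0.0],        # d over (a, b, c)
    [0.0, 0.0, 1.0, 0.0],   # e over (a, b, c, d)
])
# final == [1.0, 3.0, 7.0, 6.5, 7.0]; the last two entries are (d, e)
```

A later node's weight row simply grows by one entry per preceding node, which is how arbitrary NEAT topologies (including skip connections) fit into this layered form.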
Just out of personal curiosity, what is attracting you to use NEAT over back-prop?
Ah I see, I was hoping there existed some library or method that took care of the dirty work of translating the network structure in the FeedForwardNetwork from NEAT into ONNX.
I haven’t actually googled around to see if there is, to be clear. And someone else might know of one. If there isn’t, then there’s an opportunity for you to make one
I think a layer per node and then processing them all together might be a good solution.
Making my own library for this would be pretty cool, I hope I can figure it out!
The reason I am going for NEAT is that I am doing an exploratory study, where I am comparing agents trained with NEAT against other algorithms to assess the effects on the player.
Lastly, it is a really interesting concept that intrigued me to learn and work with it. Reinforcement learning would/will probably perform pretty well in the environment I am working with, as it is of a more complex nature.