Guys,
I created a parking lot example as a way to teach the community how ML-Agents works with Unity, as shown in the video below. One of the challenges I am running into is how to handle a dynamic list of parking spot goals during CollectObservations.
For instance, I may want to have one or more goals whose positions I'd like to collect in CollectObservations, but I get errors when I add observations dynamically, since the observation size is no longer fixed. Is there a better way to do this?
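To make this concrete, here is a simplified sketch of the kind of thing I'm doing (the class and the parkingGoals list are just placeholders, not my exact code):

```csharp
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class ParkingAgent : Agent
{
    // Goals found in the scene; the count can change from episode to episode.
    public List<Transform> parkingGoals;

    public override void CollectObservations(VectorSensor sensor)
    {
        // 3 floats per goal, so the total observation size varies with the
        // number of goals and no longer matches the fixed Space Size set in
        // the Behavior Parameters, which triggers the observation size errors.
        foreach (Transform goal in parkingGoals)
        {
            sensor.AddObservation(goal.localPosition);
        }
    }
}
```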
Let me know, thanks!
As far as I know, reinforcement learning algorithms require a fixed observation space. What you can do is have 'empty' observations that you fill in or zero out dynamically as more or fewer goals need to be observed.
E.g. you have an observation space of 6: 3 floats for target1 and 3 floats for target2.
If there is no target2, just send zeros for those 3 float observations (null may work too, but I'm not sure).
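A minimal sketch of that idea, assuming a fixed maxGoals and a goal list (the names here are just illustrative, not from the actual project):

```csharp
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class ParkingAgent : Agent
{
    // Maximum number of goals the agent will ever see; set the Vector Observation
    // Space Size in the Behavior Parameters to maxGoals * 3.
    public int maxGoals = 2;
    public List<Transform> parkingGoals;

    public override void CollectObservations(VectorSensor sensor)
    {
        for (int i = 0; i < maxGoals; i++)
        {
            if (i < parkingGoals.Count)
            {
                // Slot has a real goal: observe its position relative to the agent.
                sensor.AddObservation(transform.InverseTransformPoint(parkingGoals[i].position));
            }
            else
            {
                // Empty slot: pad with zeros so the observation size stays constant.
                sensor.AddObservation(Vector3.zero);
            }
        }
    }
}
```

Adding one extra flag observation per slot (1 for a real goal, 0 for padding) can also help the network tell a padded slot apart from a goal that genuinely sits at the origin.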
It's not (yet) possible to do dynamically sized observations in ML-Agents; it would require a different neural network architecture, such as an LSTM on the input. But it's not impossible =)