For doing reinforcement learning on a board game (which has specific connected fields), I’d like to include some kind of convolution in my network. I saw that you can specify this by selecting an appropriate vis_encode_type in the training config file, but I’m wondering: does this only work for visual observations (e.g. Camera/Texture sensors)?
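For reference, this is the option I mean (from my trainer config; the behavior name and the other values are just examples):

```yaml
behaviors:
  BoardAgent:
    trainer_type: ppo
    network_settings:
      vis_encode_type: simple   # or e.g. nature_cnn, resnet, ...
```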
I find it easier to encode the board state myself, and a camera sensor feels like overkill here, perhaps even misleading.
So the question is: can I still use convolutions for simple observations (a list of floats)?
Additionally, I’d like to encode multiple channels (one for each type of game piece). How can I do this in CollectObservations?
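To make it concrete, here is a minimal sketch of what I’m doing at the moment (board size, number of piece types and PieceAt are placeholders for my own code):

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class BoardAgent : Agent
{
    // Hypothetical board dimensions and piece-type count, just for illustration.
    const int BoardWidth = 8;
    const int BoardHeight = 8;
    const int PieceTypes = 3;

    // Placeholder for my own board lookup: returns the piece type at (x, y), or -1 if empty.
    int PieceAt(int x, int y) { /* ... */ return -1; }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Flattened "multi-channel" encoding: one one-hot block per cell,
        // i.e. PieceTypes floats per field. With a VectorSensor this ends up
        // as a flat list of floats, so the spatial structure is not explicit.
        for (int y = 0; y < BoardHeight; y++)
        {
            for (int x = 0; x < BoardWidth; x++)
            {
                int piece = PieceAt(x, y);
                for (int c = 0; c < PieceTypes; c++)
                {
                    sensor.AddObservation(piece == c ? 1f : 0f);
                }
            }
        }
    }
}
```

Is there a way to get this kind of board encoding through a convolutional encoder instead of the fully connected path used for vector observations?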