For my final year project, I'm working on an implementation of multitasking for multiple RL agents using ML-Agents. My goal is to produce three different results: a single agent performing multiple tasks, multiple agents performing one task, and multiple agents performing multiple tasks.
I have faced difficulties implementing the concept of agents performing multiple tasks in Unity. I'm trying to find a way to implement it, either from scratch or based on any ideas on how to tackle this problem (or, even better, an already-built example of multitasking in Unity). Any help or advice is much appreciated.
The current ML-Agents Toolkit doesn't have explicit algorithm support for multi-task training, but multi-task learning is one of the things on our roadmap, and we hope we can support it soon.
Here are some directions you can try out with the current toolkit, though there are definitely better approaches that would require more effort:
Input a signal representing the current task as an observation, and train the agent with a different reward function depending on the task.
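To make the first direction concrete, here is a minimal Python sketch of a toy environment where a one-hot task ID is appended to the observation and the reward function switches with the task. All class and method names here are illustrative stand-ins, not part of the ML-Agents API (in Unity you would do the equivalent inside your Agent's observation and reward code).

```python
import numpy as np

class MultiTaskEnv:
    """Toy sketch: the observation includes a one-hot task ID, and the
    reward function depends on the active task. Names are illustrative,
    not ML-Agents API."""

    NUM_TASKS = 2

    def __init__(self):
        self.task = 0
        self.state = np.zeros(2)

    def reset(self, task):
        self.task = task
        self.state = np.random.uniform(-1.0, 1.0, size=2)
        return self._observe()

    def _observe(self):
        # Append a one-hot task signal to the raw state so a single
        # policy can condition its behaviour on the current task.
        one_hot = np.eye(self.NUM_TASKS)[self.task]
        return np.concatenate([self.state, one_hot])

    def step(self, action):
        self.state += action
        # Task-dependent reward: task 0 drives the agent toward the
        # origin, task 1 toward the point (1, 1).
        target = np.zeros(2) if self.task == 0 else np.ones(2)
        reward = -float(np.linalg.norm(self.state - target))
        return self._observe(), reward
```

The key point is that the same network sees which task is active, so a single policy can learn task-specific behaviour.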
Make multiple brains and train one brain for each task. Then, when running inference, choose which brain to use according to the current objective.
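The second direction amounts to a simple dispatch over trained policies. Below is a hedged Python sketch of that routing logic; the policy objects are placeholder callables standing in for per-task trained models (per-task brains, in ML-Agents terms), and all names are made up for illustration.

```python
# Sketch: one trained policy per task, selected at inference time.
# Policy objects here are stand-in callables, not ML-Agents brains.

def make_policy(task_name):
    # Placeholder for loading a model trained on one task,
    # e.g. one network checkpoint per brain.
    def policy(observation):
        return f"action-for-{task_name}"
    return policy

policies = {
    "collect": make_policy("collect"),
    "navigate": make_policy("navigate"),
}

def act(current_task, observation):
    # Route the observation to the policy trained for the active task.
    return policies[current_task](observation)
```

The trade-off versus the first direction is that each brain only ever sees its own task, so there is no transfer between tasks, but training each brain is simpler.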
We did something similar for our final year project: we implemented multi-objective reinforcement learning, and we managed to train the brain with weights so that its behaviour could be changed between objectives without retraining. I need to redo this video, but here is the link for the video
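One common way to get that "weights change behaviour without retraining" effect is linear scalarization: the environment emits a vector of per-objective rewards, and a weight vector collapses it to a scalar. If the policy is trained conditioned on the weights, varying them at inference shifts the behaviour. This sketch shows only the scalarization step; whether the original project used exactly this scheme is an assumption.

```python
import numpy as np

def scalarize(reward_vector, weights):
    """Linear scalarization: collapse a vector of per-objective rewards
    into one scalar via a convex combination of the weights. Changing
    the weights shifts emphasis between objectives."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the weights sum to 1
    return float(np.dot(w, reward_vector))
```

For example, with two objectives, weights of (1, 1) average the two rewards equally, while (1, 0) optimizes only the first objective.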