Hi,
I wonder if anyone has tried to wrap their OWN environment to work with Unity — that is, wrote a wrapper using BaseEnv.
I have seen the GymToUnity wrapper in the pull requests, but it is obviously very simplistic and doesn't use the full capabilities of self-play, teams, GAIL, and so on…
Any thoughts, recommendations, or experiences?
Thanks
@ShirelJosef can you elaborate more on your use case please? Are you using your own environment or your own trainer?
You can follow this guide to integrate ML-Agents with a new environment: ml-agents/docs/Learning-Environment-Create-New.md at main · Unity-Technologies/ml-agents · GitHub
Yes, the capabilities of the Gym wrapper are limited. If you’re planning to use your own implementation of RL, have you looked at the Low-level API?
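For anyone exploring the Low-level API route, here is a rough structural sketch of what such a wrapper could look like. To keep it self-contained it does not import the real `mlagents_envs` package: `BaseEnvLike`, `ToyEnv`, and the behavior name `"ToyBehavior"` are made-up stand-ins, and only the method names (`reset`, `step`, `get_steps`, `set_actions`, `close`) mirror the documented `BaseEnv` surface.

```python
# Hypothetical sketch of wrapping a custom simulation behind an
# ML-Agents-style BaseEnv interface. Nothing here imports the real
# mlagents_envs package; the types below are simplified stand-ins.
from abc import ABC, abstractmethod
from typing import Dict, List, Tuple


class ToyEnv:
    """A made-up custom simulation: the state counts up, episode ends at 5."""

    def __init__(self) -> None:
        self.state = 0

    def restart(self) -> int:
        self.state = 0
        return self.state

    def advance(self, action: int) -> Tuple[int, float, bool]:
        # Returns (observation, reward, done).
        self.state += action
        done = self.state >= 5
        return self.state, 1.0 if done else 0.0, done


class BaseEnvLike(ABC):
    """Simplified stand-in for the abstract surface of BaseEnv."""

    @abstractmethod
    def reset(self) -> None: ...

    @abstractmethod
    def step(self) -> None: ...

    @abstractmethod
    def get_steps(self, behavior_name: str) -> Tuple[List[int], List[float], List[bool]]: ...

    @abstractmethod
    def set_actions(self, behavior_name: str, actions: List[int]) -> None: ...

    @abstractmethod
    def close(self) -> None: ...


class ToyEnvWrapper(BaseEnvLike):
    """Adapts ToyEnv to the set_actions -> step -> get_steps cycle."""

    def __init__(self) -> None:
        self._env = ToyEnv()
        self._pending: Dict[str, List[int]] = {}
        self._last: Tuple[List[int], List[float], List[bool]] = ([0], [0.0], [False])

    def reset(self) -> None:
        obs = self._env.restart()
        self._last = ([obs], [0.0], [False])

    def set_actions(self, behavior_name: str, actions: List[int]) -> None:
        # Actions are queued here and only applied on the next step().
        self._pending[behavior_name] = actions

    def step(self) -> None:
        actions = self._pending.pop("ToyBehavior", [0])
        obs, reward, done = self._env.advance(actions[0])
        self._last = ([obs], [reward], [done])

    def get_steps(self, behavior_name: str) -> Tuple[List[int], List[float], List[bool]]:
        return self._last

    def close(self) -> None:
        pass  # Release simulator resources here in a real wrapper.
```

The real `BaseEnv` hands back batched `DecisionSteps`/`TerminalSteps` objects and takes an `ActionTuple`, so an actual wrapper has considerably more bookkeeping, but the queue-actions/step/read-results loop above is the core contract.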
Reading your question again, I guess you're referring to this PR. Unfortunately, supporting arbitrary environments with ML-Agents is not a priority on our roadmap.