ML-Agents v1.0 and Roadmap Update (Mar 19, 2020) - Discussion Thread

Hi all,

Please see the release notes here for what was released in v0.15.0. This is a release candidate for our upcoming v1.0 release, which is targeted for mid-April.

What does it mean to be 1.0?

We want ML-A to be production-worthy for use in real games. This means stable APIs that are thoroughly tested to work with LTS versions of Unity. We are working to provide the ML-Agents C# toolkit as a package in the Unity Package Manager, which means the package will be supported throughout the LTS lifecycle of the corresponding editor version.

Going forward, studios should feel comfortable locking in on any major released version of ML-A, since these versions will be properly supported. We will announce more details via a Unity blog post closer to the 1.0 release.

Additionally, our philosophy at ML-A has always been to support open source, so our researchers, game developers, and innovators can continue to push the boundaries of deep learning and AI. ML-A will continue to be open-source so anyone can extend ML-A to suit their needs.

With that, WE NEED YOUR FEEDBACK ON v0.15! ML-A 1.0 is a culmination of the feedback provided over the years. Since we have stabilized the public API for the C# components, v0.15 is our release candidate for 1.0, and we want to make sure it is suitable for production.

ML-Agents 3 Month Roadmap as of 3/18/2020 (we are currently prioritizing feedback for 1.0)

  • v0.15 RC feedback and changes needed for 1.0 release

  • Additional refactoring of the Unity SDK (C#) codebase

  • Additional refactoring of the Trainers (Python) codebase

  • Explicit definition and documentation of the public C# APIs

  • Release of ML-Agents Unity preview package (object-oriented, MonoBehaviour-based)

  • Enable multi-agent training for more scenarios, such as asymmetric or non-adversarial games

  • Multiple improvements to the UI editor, configuration, and training workflow

  • Additional example environments

Previous roadmap updates:

Jan 21, 2020
Feb 14, 2020

Thanks, and please continue to provide us with valuable feedback.

Jeff

P.S. - Please see our guidelines on where to route issues.



Congratulations to the team on such constant rapid improvements, and on a job well done so far! (you guys put out updates like they're going out of style!)

I'm looking forward to migrating everything into sensor classes for better organization. My ultimate goals will involve very high-dimensional and multi-modal observation spaces, and the existing ISensors are already starting to make that easier to manage.

Regarding C# refactoring, I'm not sure there's much room for improvement in terms of simplicity, but one thing that might be useful is to include some optimized array functions for C#-side data handling.

Lately I have become concerned about the scalability of visual observations. Deploying in-game cameras involves aspects of rendering that I don't understand at all. If possible, some guidance on best practices when using cameras (how to make rendering cheaper and more efficient) would be very helpful (perhaps some options on the ISensor camera component that can override the camera object's settings to ensure speed at the cost of detail or effects).

It would also be excellent if some generic filters could be made optional in the camera sensor (such as a depth-based grayscale). Using visual observations is one of the more intriguing parts of ML-Agents, but it is also difficult to get right, and resource-intensive if not done efficiently.
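For reference, here's roughly what I do today to keep things cheap, a sketch based on the CameraSensorComponent that ships with the package (treat the property names as assumptions, since they may vary between versions):

```csharp
using UnityEngine;
using Unity.MLAgents.Sensors;

public class CheapVisualObservation : MonoBehaviour
{
    public Camera observationCamera; // dedicated low-cost camera for the agent

    void Awake()
    {
        // A small render size plus grayscale keeps the observation tensor tiny;
        // a depth-based filter would still require a custom shader on the camera.
        var camSensor = gameObject.AddComponent<CameraSensorComponent>();
        camSensor.Camera = observationCamera;
        camSensor.Width = 84;
        camSensor.Height = 84;
        camSensor.Grayscale = true;
    }
}
```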


In older versions of ML-Agents it was possible to access the observations in the (back then) "Heuristic Brain".

In 0.14 I had to make m_Observations in VectorSensor public in order to make heuristic decisions based on observations.
Is there a more elegant way than this? The examples all seem to use direct controller input.

I would find it useful to have access to those observations in the first release candidate again. If I'm just missing something here: disregard everything I said ;-)
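For now, my less invasive workaround is to keep my own copy of the observations as I collect them, instead of patching VectorSensor. Just a sketch, assuming the Release 1 namespaces and Heuristic signature:

```csharp
using System.Collections.Generic;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class HeuristicAgent : Agent
{
    readonly List<float> m_LastObs = new List<float>();

    public override void CollectObservations(VectorSensor sensor)
    {
        m_LastObs.Clear();
        m_LastObs.Add(0.5f); // placeholder observation, e.g. distance to target

        foreach (var o in m_LastObs)
        {
            sensor.AddObservation(o); // the same values go to the sensor...
        }
    }

    public override void Heuristic(float[] actionsOut)
    {
        // ...and stay readable here for hand-written decisions.
        actionsOut[0] = m_LastObs.Count > 0 && m_LastObs[0] > 0f ? 1f : -1f;
    }
}
```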

There are online video tutorials for ML-Agents, but they place Academy and Brain objects in the scene attached as MonoBehaviours. They also discuss various Brain types, which no longer exist. The same is true of posts on the Unity blog.

I think documentation should mention such things where appropriate, e.g.:

• Starting with version 0.14, you no longer place an Academy in the scene. Instead, it is now a singleton that can only be accessed via code.

• Starting with version 0.11, you no longer create Brains as either ScriptableObjects or attached scripts. Instead, you attach a Behavior Parameters component to an agent and give it a name. If you want several agents to share a brain, give their Behavior Parameters the same name.

This was really confusing when I first started using ML Agents.


Yay! I've been looking forward to a 1.0 release ever since 0.3. Really excited for this.

I would love to see more attention given to the onboarding process for unity-ml users, especially if we assume you are not targeting RL experts. One idea could be a list of "gotchas". Things like: if you only give the agent position information every frame, it cannot figure out velocity, because it has no sense of memory (I see this pretty often, especially in games that don't use rigid bodies). Or how to write code that stays stable when the simulation speed increases; that is usually forgotten and causes things to go badly.
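To illustrate that gotcha with a sketch (my own example, assuming the Release 1 API):

```csharp
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class MovingAgent : Agent
{
    Vector3 m_LastPosition;

    public override void CollectObservations(VectorSensor sensor)
    {
        // The policy has no memory of past frames, so derive velocity
        // yourself rather than hoping it can be inferred from position alone.
        var velocity = (transform.position - m_LastPosition) / Time.fixedDeltaTime;
        m_LastPosition = transform.position;

        sensor.AddObservation(transform.position); // 3 floats
        sensor.AddObservation(velocity);           // 3 more floats
    }
}
```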

Is the team looking to support different learning backends as a first-class feature? That is to say, the env runs through unity-ml but the neural network training is done in Python written by some other entity. I always find myself having to write some code into the unity-ml library base to get this workflow to function, but I also understand this is an edge case.

I also usually expose more of the variables on the C# side of things. I don't recall exactly which, but I think it was related to rewards or steps taken (might be neither, but what I was looking for wasn't too exotic either).

Finally, I think just general stability, and maybe more helpful error messages. In my last pass over the library I had issues related to this and this. Some might be user error, some might be genuine bugs, but the cases where you anticipate a user error and point to what might be the problem would be super useful.

I would be down if you are looking to have a more official qualitative data collection session from your users.

Overall, thank you for your work! Unity ML got me into reinforcement learning and now I am an addict!

Apparently, 80% or more of the performance in neural networks comes from 20% or less of the nodes and synapses. Some architectures exploit this for efficiency at deployment time by finding ways to atrophy and "drop out" the unnecessary connections.

Another, similar function can be used to harden the weights of the important synapses (instead of dropping out the rest), such that transfer learning won't destroy the initially learned task.

This kind of pruning could be especially useful for making very effective and efficient networks for deployment....
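As a toy illustration of the pruning idea (just the concept, nothing to do with the ML-Agents API):

```csharp
using System;

static class PruningDemo
{
    // Zero out the weakest connections and return a mask of survivors; the
    // mask could later be used to freeze ("harden") the kept weights so that
    // transfer learning doesn't overwrite the original task.
    public static float[] PruneByMagnitude(float[] weights, float keepFraction, out bool[] mask)
    {
        var byMagnitude = (float[])weights.Clone();
        Array.Sort(byMagnitude, (a, b) => Math.Abs(b).CompareTo(Math.Abs(a)));
        int keep = Math.Max(1, (int)(weights.Length * keepFraction));
        float threshold = Math.Abs(byMagnitude[keep - 1]);

        mask = new bool[weights.Length];
        var pruned = new float[weights.Length];
        for (int i = 0; i < weights.Length; i++)
        {
            mask[i] = Math.Abs(weights[i]) >= threshold;
            pruned[i] = mask[i] ? weights[i] : 0f;
        }
        return pruned;
    }
}
```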

Just food for thought!

This is awesome! The ML-A framework is simply amazing, thank you!

Wish it were easier to run ML-Agents in environments such as Docker and Google Colab.


Hey - could you elaborate on which other C# variables you would want exposed?

I just merged https://github.com/Unity-Technologies/ml-agents/pull/3825 which will be in the next release. You can now call Agent.GetObservations() to get a read-only list of the observations that were made in CollectObservations.
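A minimal sketch of how that can be used, e.g. inside Heuristic:

```csharp
public override void Heuristic(float[] actionsOut)
{
    // Read-only view of whatever CollectObservations produced this step.
    var obs = GetObservations();

    // Toy hand-written policy based on the first observation.
    actionsOut[0] = obs.Count > 0 && obs[0] > 0f ? 1f : -1f;
}
```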


Hey, I've been using ML-Agents since it was first released; the progress has been awesome and many of the newest features are helpful. I think there are a handful of things that would make it easier to use:

  • Counting vector observations can be tedious when making iterative changes to an agent. The warnings about expecting one number of observations and receiving another are helpful, but it would be even better if this could happen more automatically.

  • I use the Unity package Squiggle (https://assetstore.unity.com/packages/tools/utilities/squiggle-21970) to better understand the observations I'm sending my agent; without it, development would be much more difficult. Some kind of native tool for debugging both the inputs and outputs of the network would be really great.

  • Maybe this is already a feature and I've somehow overlooked it, but it would be great if there were a way to log custom data to TensorBoard without getting into the Python API, for example logging the average number of goals scored in a game or some other environment-specific diagnostic. My biggest time sink when developing with ML-Agents is training environments that seem to perform well according to the training statistics but don't behave as intended. Having more insight like this during training could tell the developer that something has gone wrong and let them stop training and fix the problem before sinking 12-24 hours into a run.

  • More error checking in general would be great. Catching NaN observations or rewards is good, but if you accidentally divide by zero, I think it still accepts Infinity as an observation, and eventually the network starts outputting NaN.


They added a function to count the number of observations in 1.0.


Thanks for the feedback, Sterling!

For #1, #3, and #4 - these have been addressed in ML-Agents Release 1 (last Thursday) or in previous releases. For #2 - we'd be interested to chat with you more about it; DM me if interested.
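For #3 specifically, here's a rough sketch of logging a custom stat from C# (assuming the Academy.Instance.StatsRecorder API from Release 1; the metric name and counter below are placeholders):

```csharp
using Unity.MLAgents;

public class ScoreReporter
{
    // Call this from your game code, e.g. whenever an episode ends.
    public void ReportGoals(float goalsScoredThisEpisode)
    {
        // The value shows up in TensorBoard alongside the built-in training stats.
        Academy.Instance.StatsRecorder.Add("Environment/GoalsScored", goalsScoredThisEpisode);
    }
}
```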

Also - from @celion_unity

“Counting how many vector observations…” - he could write a custom ISensor so that he wouldn't have to adjust the observation count in the UI. We haven't spec'ed out attribute-based sensors, which would help, but we can also chat about that.
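A rough sketch of that custom ISensor idea (assuming the interface shape around Release 1; the writer type has been renamed across versions):

```csharp
using UnityEngine;
using Unity.MLAgents.Sensors;

// The observation size lives in code, so there is no count to keep in sync in the UI.
public class VelocitySensor : ISensor
{
    readonly Rigidbody m_Body;

    public VelocitySensor(Rigidbody body) { m_Body = body; }

    public string GetName() => "VelocitySensor";
    public int[] GetObservationShape() => new[] { 3 };

    public int Write(ObservationWriter writer)
    {
        var v = m_Body.velocity;
        writer[0] = v.x;
        writer[1] = v.y;
        writer[2] = v.z;
        return 3; // number of floats written
    }

    public byte[] GetCompressedObservation() => null;
    public SensorCompressionType GetCompressionType() => SensorCompressionType.None;
    public void Update() { }
    public void Reset() { }
}
```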

"more error checking ... NAN" - we check for NaN and Inf on observations and rewards in debug mode, and raise an exception if we find them: https://github.com/Unity-Technologies/ml-agents/blob/eedc3f9c052295d89bed0ac40a8e82a8fd17fead/com.unity.ml-agents/Runtime/Utilities.cs#L79