I am trying to set up an environment where two agents train against each other using self-play.
This is a turn-based soccer game where an agent can move one of its players a chosen distance and direction each turn. I have two agents, each of which has control over all of its team's players.
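For context, both teams share the same behavior name ("Soccer") and differ only in the Team Id on each player's Behavior Parameters component, which is what produces the Soccer?team=1 and Soccer?team=2 behavior ids in the log below. A minimal sketch of that wiring (not my actual code, which just sets these values in the Inspector; the class and field names here are placeholders, and exact property names may vary between ML-Agents package versions):

using Unity.MLAgents.Policies;
using UnityEngine;

// Placeholder sketch of how each player's team is wired up.
public class PlayerTeamSetup : MonoBehaviour
{
    // 1 for one team's players, 2 for the other team's players,
    // matching the Soccer?team=1 / Soccer?team=2 behavior ids in the log.
    public int teamId = 1;

    void Awake()
    {
        var behaviorParams = GetComponent<BehaviorParameters>();
        behaviorParams.BehaviorName = "Soccer"; // same behavior name for both teams
        behaviorParams.TeamId = teamId;         // team id distinguishes the two self-play sides
    }
}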
When running mlagents-learn, I get the following logs:
INFO [environment.py:265] Connected new brain: Soccer?team=2
INFO [stats.py:130] Hyperparameters for behavior name Soccer:
trainer_type: ppo
hyperparameters:
  batch_size: 2048
  buffer_size: 20480
  learning_rate: 0.0003
  beta: 0.005
  epsilon: 0.2
  lambd: 0.95
  num_epoch: 3
  learning_rate_schedule: constant
network_settings:
  normalize: False
  hidden_units: 64
  num_layers: 2
  vis_encode_type: simple
  memory: None
reward_signals:
  extrinsic:
    gamma: 0.99
    strength: 1.0
init_path: None
output_path: results\test\Soccer
keep_checkpoints: 5
max_steps: 50000000
time_horizon: 1000
summary_freq: 10000
threaded: True
self_play:
  save_steps: 50000
  team_change: 200000
  swap_steps: 50000
  window: 10
  play_against_latest_model_ratio: 0.5
  initial_elo: 1200.0
behavioral_cloning: None
INFO [environment.py:265] Connected new brain: Soccer?team=1
WARNING [env_manager.py:109] Agent manager was not created for behavior id Soccer?team=1.
I am not sure what is causing the Agent manager warning. When running the SoccerTwos example, I do not get this warning, but I am having a hard time figuring out the difference between my environment and the SoccerTwos environment.
Version information:
ml-agents: 0.17.0.dev0
ml-agents-envs: 0.17.0.dev0
Communicator API: 1.0.0
TensorFlow: 2.2.0
My Unity version is 2019.1.0f2.
Has anyone else run into this issue, or does anyone have ideas on how to solve it?
I’ve seen this issue on the ml-agents GitHub, but there doesn’t seem to be a resolution yet.
Any help would be appreciated!