I’m training a virtual robot using only visual observations. Everything was going as expected in the training phase until I tried to use imitation learning as explained here: https://blogs.unity3d.com/es/2019/11/11/training-your-agents-7-times-faster-with-ml-agents/
It’s true that the robot learns faster than with regular training, but the memory used by the trainer grows linearly. If I let the trainer run for 4-5 hours, the Python process reaches 13 GB of RAM.
Here’s the config file:
behaviors:
  Robot_Imitation:
    trainer_type: sac
    hyperparameters:
      learning_rate: 0.0003
      learning_rate_schedule: constant
      batch_size: 64
      buffer_size: 500000
      buffer_init_steps: 1000
      tau: 0.01
      steps_per_update: 10.0
      save_replay_buffer: false
      init_entcoef: 0.01
      reward_signal_steps_per_update: 10.0
    network_settings:
      normalize: false
      hidden_units: 256
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 2.0
      gail:
        gamma: 0.99
        strength: 0.02
        encoding_size: 128
        learning_rate: 0.0003
        use_actions: true
        use_vail: false
        demo_path: Demos/RobotDemo.demo
    keep_checkpoints: 5
    max_steps: 10000000
    time_horizon: 128
    summary_freq: 10000
    threaded: true
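For context, here’s a back-of-the-envelope estimate of how big the SAC replay buffer could get with visual observations at this buffer_size. This is only a sketch; the 84x84 RGB resolution and float32 storage are assumptions for illustration, not values from my actual setup:

```python
# Rough estimate of replay buffer memory for visual observations.
# Assumed (hypothetical) values: 84x84 RGB frames stored as float32,
# one visual observation per step.
buffer_size = 500_000            # from the config above
obs_bytes = 84 * 84 * 3 * 4      # height * width * channels * sizeof(float32)
total_gb = buffer_size * obs_bytes / 1024**3
print(f"~{total_gb:.1f} GB for observations alone")  # ~39.4 GB
```

Even at this small resolution, a full buffer would dwarf the 13 GB I’m seeing, so steady growth while the buffer fills would at least be consistent with the numbers.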