Hey all,
I’m able to train using the standard procedure, but now I want to use multiple Unity instances for the RollerBall example. When I attach my debugger to the mlagents.trainers.learn script (from PyCharm) I get "module 'torch' has no attribute 'set_num_threads'", which matches this GitHub issue: Trying to use the learn.py script from the command line throws a "module 'torch' has no attribute 'set_num_threads'" error if PyTorch is not installed · Issue #4526 · Unity-Technologies/ml-agents · GitHub. Is there another way to attach a debugger?
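One alternative I'm considering is PyCharm's "Python Debug Server" (pydevd-pycharm) instead of launching learn.py under the debugger directly, roughly like the wrapper below (just a sketch, not tested; the host and port are placeholders for whatever the Run/Debug configuration shows, and I haven't confirmed it sidesteps the torch import problem):

# debug_learn.py - sketch of connecting to a PyCharm "Python Debug Server"
# configuration instead of attaching the debugger to the module directly.
import pydevd_pycharm

pydevd_pycharm.settrace(
    "localhost",          # machine where PyCharm is running
    port=5678,            # port from the Debug Server run configuration (placeholder)
    stdoutToServer=True,
    stderrToServer=True,
    suspend=False,        # only pause at breakpoints, not immediately
)

from mlagents.trainers.learn import main

if __name__ == "__main__":
    main()  # same CLI arguments as `python -m mlagents.trainers.learn ...`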
Thanks! Specifics are below in case anyone is familiar with the problem.
Version information:
Unity version: 2019.4.21f1
ml-agents: 0.24.0,
ml-agents-envs: 0.24.0,
Communicator API: 1.4.0,
PyTorch: 1.7.0
command:
python -m mlagents.trainers.learn config/rollerball_config.yaml --run-id=RollerBall3 --num-envs=2 --env=$UNITY_PATH --force
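In case the config matters, rollerball_config.yaml is basically the one from the Making a New Environment tutorial (retyped from memory here, so individual values may differ slightly from my actual file):

behaviors:
  RollerBall:
    trainer_type: ppo
    hyperparameters:
      batch_size: 10
      buffer_size: 100
      learning_rate: 3.0e-4
      beta: 5.0e-4
      epsilon: 0.2
      lambd: 0.99
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: false
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000

Full traceback from the run: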
Traceback (most recent call last):
  File "/home/joe/Documents/tools/anaconda3/envs/shade/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/joe/Documents/tools/anaconda3/envs/shade/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/learn.py", line 255, in <module>
    main()
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/learn.py", line 250, in main
    run_cli(parse_command_line())
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/learn.py", line 246, in run_cli
    run_training(run_seed, options)
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/learn.py", line 125, in run_training
    tc.start_learning(env_manager)
  File "/home/joe/Documents/ml-agents/ml-agents-envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/trainer_controller.py", line 197, in start_learning
    raise ex
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/trainer_controller.py", line 173, in start_learning
    self._reset_env(env_manager)
  File "/home/joe/Documents/ml-agents/ml-agents-envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/trainer_controller.py", line 105, in _reset_env
    env_manager.reset(config=new_config)
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/env_manager.py", line 68, in reset
    self.first_step_infos = self._reset_env(config)
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/subprocess_env_manager.py", line 333, in _reset_env
    ew.previous_step = EnvironmentStep(ew.recv().payload, ew.worker_id, {}, {})
  File "/home/joe/Documents/ml-agents/ml-agents/mlagents/trainers/subprocess_env_manager.py", line 98, in recv
    raise env_exception
mlagents_envs.exception.UnityEnvironmentException: Environment shut down with return code 0.
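For what it's worth, my next step is a quick sanity check that the build starts and resets on its own, outside the trainer, using the low-level Python API along these lines (rough sketch; the file_name is a placeholder for the same build $UNITY_PATH points at):

from mlagents_envs.environment import UnityEnvironment

# Sanity-check sketch: open the build directly, reset once, list behaviors, close.
# "Builds/RollerBall/RollerBall" is a placeholder for my actual build path.
env = UnityEnvironment(file_name="Builds/RollerBall/RollerBall", worker_id=0, no_graphics=True)
try:
    env.reset()
    print("Behaviors:", list(env.behavior_specs.keys()))
finally:
    env.close()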