How to add a camera to the soccer game?

Hi,

I am playing around with the soccer game. I replaced the ray sensor with a camera sensor.

This is my training configuration:

behaviors:
  SoccerTwosVisual:
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 1024
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: true
      hidden_units: 256
      num_layers: 2
      vis_encode_type: resnet
    reward_signals:
      extrinsic:
        gamma: 0.8
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 50000000
    time_horizon: 1000
    summary_freq: 10000
    threaded: false
    self_play:
      save_steps: 50000
      team_change: 200000
      swap_steps: 2000
      window: 10
      play_against_latest_model_ratio: 0.5
      initial_elo: 1200.0

However, I am getting the following error:

Traceback (most recent call last):
  File "/usr/local/bin/mlagents-learn", line 33, in <module>
    sys.exit(load_entry_point('mlagents==0.24.0.dev0', 'console_scripts', 'mlagents-learn')())
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/learn.py", line 274, in main
    run_cli(parse_command_line())
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/learn.py", line 270, in run_cli
    run_training(run_seed, options)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/learn.py", line 149, in run_training
    tc.start_learning(env_manager)
  File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/trainer_controller.py", line 172, in start_learning
    n_steps = self.advance(env_manager)
  File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/trainer_controller.py", line 230, in advance
    new_step_infos = env_manager.get_steps()
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/env_manager.py", line 112, in get_steps
    new_step_infos = self._step()
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/subprocess_env_manager.py", line 264, in _step
    self._queue_steps()
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/subprocess_env_manager.py", line 257, in _queue_steps
    env_action_info = self._take_step(env_worker.previous_step)
  File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/subprocess_env_manager.py", line 378, in _take_step
    all_action_info[brain_name] = self.policies[brain_name].get_action(
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/policy/torch_policy.py", line 207, in get_action
    run_out = self.evaluate(decision_requests, global_agent_ids)
  File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/policy/torch_policy.py", line 173, in evaluate
    action, log_probs, entropy, memories = self.sample_actions(
  File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/policy/torch_policy.py", line 135, in sample_actions
    actions, log_probs, entropies, memories = self.actor_critic.get_action_stats(
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/networks.py", line 500, in get_action_stats
    action, log_probs, entropies, actor_mem_out = super().get_action_stats(
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/networks.py", line 303, in get_action_stats
    encoding, memories = self.network_body(
  File "/usr/local/lib/python3.8/dist-packages/torch-1.7.1-py3.8-linux-x86_64.egg/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/networks.py", line 87, in forward
    processed_obs = processor(obs_input)
  File "/usr/local/lib/python3.8/dist-packages/torch-1.7.1-py3.8-linux-x86_64.egg/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/encoders.py", line 270, in forward
    before_out = hidden.view(batch_size, -1)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
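
If it helps, the failing call reproduces outside ML-Agents with plain PyTorch: .view() cannot flatten a tensor that is not contiguous in memory, while .reshape() falls back to a copy when it has to. A minimal sketch:

import torch

# permute() returns a non-contiguous view of the underlying storage,
# so .view() cannot flatten the last two dimensions without a copy
x = torch.randn(2, 3, 4).permute(0, 2, 1)

try:
    x.view(2, -1)  # raises the same RuntimeError as in the traceback
except RuntimeError as e:
    print(e)

print(x.reshape(2, -1).shape)  # reshape copies when needed: torch.Size([2, 12])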

My guess would be that something is wrong with my camera setup. How can I fix it?

Regards,
RUn

Hi, I think you found a bug on master. I tried to make a fix on this branch: fix-contiguous-resnet (https://github.com/Unity-Technologies/ml-agents/tree/fix-contiguous-resnet). Could you try this branch out and tell us if it resolves your problem?
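
For context, the crash comes from the ResNet visual encoder (selected by vis_encode_type: resnet in your config), which calls .view() on a tensor that PyTorch no longer considers contiguous. The branch essentially lets that call fall back to a copy; a minimal sketch of the change, assuming the fix touches the line shown in the traceback (the actual diff on the branch may differ):

# mlagents/trainers/torch/encoders.py, in the ResNet encoder's forward()
# .reshape() behaves like .view() for contiguous tensors and copies
# otherwise, so it also handles the non-contiguous case that crashed here
before_out = hidden.reshape(batch_size, -1)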

Thanks. This solves the problem.
Can two camera sensors be added to an Agent?

Yes, although there is no demo environment that shows it. Please let us know if it does not work for you.
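
If you want to sanity-check the setup, you can inspect the behavior spec from the Python side; with two camera sensors on the Agent you should see two image-shaped observations. A rough sketch, assuming mlagents_envs 0.24 (where the spec exposes observation_shapes) and a scene running in the Editor:

from mlagents_envs.environment import UnityEnvironment

# Connect to the Unity Editor (press Play when the call prompts you)
env = UnityEnvironment(file_name=None)
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# With two cameras there should be two image-shaped entries here,
# e.g. [(84, 84, 3), (84, 84, 3), ...] depending on your sensor settings
print(spec.observation_shapes)

env.close()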