How to add a camera to the soccer game?

Discussion in 'ML-Agents' started by unguranr, Jan 12, 2021.

  1. unguranr

    Joined:
    Jan 10, 2021
    Posts:
    5
    Hi,

    I am playing around with the soccer game. I replaced the ray sensor with a camera sensor:
    [Screenshot: Screenshot from 2021-01-12 22-17-36.png]

    This is my training configuration:

    Code (YAML):
    behaviors:
      SoccerTwosVisual:
        trainer_type: ppo
        hyperparameters:
          batch_size: 64
          buffer_size: 1024
          learning_rate: 0.0003
          beta: 0.005
          epsilon: 0.2
          lambd: 0.95
          num_epoch: 3
          learning_rate_schedule: linear
        network_settings:
          normalize: true
          hidden_units: 256
          num_layers: 2
          vis_encode_type: resnet
        reward_signals:
          extrinsic:
            gamma: 0.8
            strength: 1.0
        keep_checkpoints: 5
        max_steps: 50000000
        time_horizon: 1000
        summary_freq: 10000
        threaded: false
        self_play:
          save_steps: 50000
          team_change: 200000
          swap_steps: 2000
          window: 10
          play_against_latest_model_ratio: 0.5
          initial_elo: 1200.0
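
    As a quick sanity check on these numbers (the ML-Agents docs recommend that buffer_size be a multiple of batch_size for PPO), here is a small stdlib-only snippet with the values copied from the config above. This script is illustrative only, not part of mlagents-learn:

```python
# Values copied from the training configuration above.
config = {
    "batch_size": 64,
    "buffer_size": 1024,
    "learning_rate": 0.0003,
    "time_horizon": 1000,
}

# PPO guidance: buffer_size should be a multiple of batch_size,
# so each collected buffer splits evenly into minibatches.
assert config["buffer_size"] % config["batch_size"] == 0, \
    "buffer_size should be a multiple of batch_size"
print("minibatches per buffer:", config["buffer_size"] // config["batch_size"])  # 16
```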
    However I am getting the following error:
    Code (Text):
    Traceback (most recent call last):
      File "/usr/local/bin/mlagents-learn", line 33, in <module>
        sys.exit(load_entry_point('mlagents==0.24.0.dev0', 'console_scripts', 'mlagents-learn')())
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/learn.py", line 274, in main
        run_cli(parse_command_line())
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/learn.py", line 270, in run_cli
        run_training(run_seed, options)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/learn.py", line 149, in run_training
        tc.start_learning(env_manager)
      File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/trainer_controller.py", line 172, in start_learning
        n_steps = self.advance(env_manager)
      File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/trainer_controller.py", line 230, in advance
        new_step_infos = env_manager.get_steps()
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/env_manager.py", line 112, in get_steps
        new_step_infos = self._step()
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/subprocess_env_manager.py", line 264, in _step
        self._queue_steps()
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/subprocess_env_manager.py", line 257, in _queue_steps
        env_action_info = self._take_step(env_worker.previous_step)
      File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/subprocess_env_manager.py", line 378, in _take_step
        all_action_info[brain_name] = self.policies[brain_name].get_action(
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/policy/torch_policy.py", line 207, in get_action
        run_out = self.evaluate(decision_requests, global_agent_ids)
      File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/policy/torch_policy.py", line 173, in evaluate
        action, log_probs, entropy, memories = self.sample_actions(
      File "/usr/local/lib/python3.8/dist-packages/mlagents_envs-0.24.0.dev0-py3.8.egg/mlagents_envs/timers.py", line 305, in wrapped
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/policy/torch_policy.py", line 135, in sample_actions
        actions, log_probs, entropies, memories = self.actor_critic.get_action_stats(
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/networks.py", line 500, in get_action_stats
        action, log_probs, entropies, actor_mem_out = super().get_action_stats(
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/networks.py", line 303, in get_action_stats
        encoding, memories = self.network_body(
      File "/usr/local/lib/python3.8/dist-packages/torch-1.7.1-py3.8-linux-x86_64.egg/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/networks.py", line 87, in forward
        processed_obs = processor(obs_input)
      File "/usr/local/lib/python3.8/dist-packages/torch-1.7.1-py3.8-linux-x86_64.egg/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/mlagents-0.24.0.dev0-py3.8.egg/mlagents/trainers/torch/encoders.py", line 270, in forward
        before_out = hidden.view(batch_size, -1)
    RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
    My guess is that something is wrong with my camera setup. How can I fix it?
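
    From what I can tell, the traceback ends inside PyTorch's Tensor.view() in mlagents/trainers/torch/encoders.py, and the error message itself suggests .reshape(). My understanding is that view() can only reinterpret a tensor whose memory layout is already row-major contiguous, while reshape() falls back to copying when it is not. A pure-Python sketch of the stride check involved (illustrative only, not PyTorch code):

```python
# Why torch.Tensor.view() can fail on a non-contiguous tensor while
# .reshape() succeeds (illustrative stride arithmetic, not PyTorch).

def contiguous_strides(shape):
    # Row-major (C-contiguous) strides: last dimension varies fastest.
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.insert(0, acc)
        acc *= dim
    return strides

def can_view(shape, strides):
    # view() reinterprets memory in place, so the layout must already
    # match the row-major contiguous strides for that shape.
    return strides == contiguous_strides(shape)

shape = (2, 3)
print(can_view(shape, contiguous_strides(shape)))  # True

# A transpose swaps shape and strides without moving any data, so the
# result is non-contiguous: view() raises, reshape() copies instead.
print(can_view((3, 2), [1, 3]))  # False
```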

    Regards,
    RUn
     
  2. vincentpierre

    Joined:
    May 5, 2017
    Posts:
    160
  3. unguranr

    Joined:
    Jan 10, 2021
    Posts:
    5
    Thanks. This solves the problem.
    Can two camera sensors be added to an Agent?
     
  4. vincentpierre

    Joined:
    May 5, 2017
    Posts:
    160
    > Can two camera sensors be added to an Agent?

    Yes, although there is no demo environment to show it. Please do let us know if it does not work for you.
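
    Roughly, each sensor component on the Agent adds its own entry to the agent's observation list, and the trainer builds one visual encoder per entry, then concatenates the encodings into a single vector for the policy. A toy pure-Python sketch of that wiring (the names encode and policy_input are hypothetical, not the real ML-Agents API):

```python
# Toy sketch: how a policy might consume observations from two camera
# sensors. Each sensor contributes one observation; each observation
# gets its own encoder; the encodings are concatenated.

def encode(obs, out_dim=4):
    # Stand-in for a conv encoder: flatten, then pad/truncate to out_dim.
    flat = [px for row in obs for px in row]
    return (flat + [0.0] * out_dim)[:out_dim]

def policy_input(obs_list):
    # One encoder per sensor; encodings concatenated into one vector.
    body = []
    for obs in obs_list:
        body.extend(encode(obs))
    return body

cam_a = [[0.1, 0.2], [0.3, 0.4]]   # observation from camera sensor 1
cam_b = [[0.5, 0.6], [0.7, 0.8]]   # observation from camera sensor 2
vec = policy_input([cam_a, cam_b])
print(len(vec))  # 8: four features per camera, two cameras
```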