Question WARNING Your environment contains multiple teams, but PPOTrainer doesn't support adversarial games

Discussion in 'ML-Agents' started by unity_6gCk04bKBvTBXw, Mar 17, 2021.

  1. unity_6gCk04bKBvTBXw

    unity_6gCk04bKBvTBXw

    Joined:
    Aug 21, 2020
    Posts:
    5
    Hi everyone, when I try to train my agent with self-play I get the following warning in the console. How can I solve this? Thanks in advance :)

    WARNING [trainer.py:240] Your environment contains multiple
    teams, but PPOTrainer doesn't support adversarial games. Enable self-play to
    train adversarial games.
     
  2. christophergoy

    christophergoy

    Joined:
    Sep 16, 2015
    Posts:
    735
    Hi,
    Does your trainer configuration behavior name match your agent’s behavior name? I’ve run into this myself and have seen that error. Please look through the python console logs again. There should be another warning.
     
  3. unity_6gCk04bKBvTBXw

    unity_6gCk04bKBvTBXw

    Joined:
    Aug 21, 2020
    Posts:
    5
    Yes, it matches. Here is the other error message:

    2021-03-17 09:20:20 WARNING [trainer.py:240] Your environment contains multiple teams, but PPOTrainer doesn't support adversarial games. Enable self-play to train adversarial games.
    2021-03-17 09:20:39 INFO [subprocess_env_manager.py:220] UnityEnvironment worker 0: environment stopping.
    2021-03-17 09:20:39 INFO [trainer_controller.py:187] Learning was interrupted. Please wait while the graph is generated.
    2021-03-17 09:20:39 WARNING [rl_trainer.py:174] Trainer has multiple policies, but default behavior only saves the first.
    2021-03-17 09:20:39 WARNING [rl_trainer.py:152] Trainer has multiple policies, but default behavior only saves the first.
    2021-03-17 09:20:39 INFO [model_serialization.py:130] Converting to results\test17\Agent Controller\Agent Controller-974.onnx
    2021-03-17 09:20:39 INFO [model_serialization.py:142] Exported results\test17\Agent Controller\Agent Controller-974.onnx
    2021-03-17 09:20:39 INFO [torch_model_saver.py:116] Copied results\test17\Agent Controller\Agent Controller-974.onnx to results\test17\Agent Controller.onnx.
    2021-03-17 09:20:39 INFO [trainer_controller.py:81] Saved Model
     
  4. mamaorha

    mamaorha

    Joined:
    Jun 16, 2015
    Posts:
    44
    I had a similar problem; adding the self-play attributes to the YAML helped.

    Code (YAML):
    self_play:
      save_steps: 50000
      team_change: 100000
      swap_steps: 2000
      window: 10
      play_against_latest_model_ratio: 0.5
      initial_elo: 1200.0
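    For reference, the self_play block is not a top-level section: it has to be nested inside the behavior entry of the trainer config, at the same level as trainer_type and hyperparameters. A minimal sketch of the overall shape (the behavior name and the PPO hyperparameter values below are placeholders, not the thread's actual config):

    Code (YAML):
    behaviors:
      Agent Controller:    # must match the agent's Behavior Name exactly
        trainer_type: ppo
        hyperparameters:
          batch_size: 512
          buffer_size: 10240
          learning_rate: 3.0e-4
        max_steps: 5000000
        self_play:    # nesting the block here is what enables adversarial training
          save_steps: 50000
          team_change: 100000
          swap_steps: 2000
          window: 10
          play_against_latest_model_ratio: 0.5
          initial_elo: 1200.0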
     
    unity_6gCk04bKBvTBXw likes this.
  5. christophergoy

    christophergoy

    Joined:
    Sep 16, 2015
    Posts:
    735
    Can you post your training configuration, your agent behavior names, and team ids?
     
  6. unity_6gCk04bKBvTBXw

    unity_6gCk04bKBvTBXw

    Joined:
    Aug 21, 2020
    Posts:
    5
    After making a few changes, I started getting a "behavior name does not match" error. Is there a special place where I should put the trainer.yaml file? And how can I activate self-play?
     

    Attached Files:

  7. mamaorha

    mamaorha

    Joined:
    Jun 16, 2015
    Posts:
    44
    The indentation is wrong; it should be something like
    https://pastebin.com/kQJj6vG2

    Though I'm not sure about the space you have in the behavior name; I suggest changing it to AgentController.

    NOTE: I'm just a random guy trying to help, not a Unity staff member or anything.
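    Also, as far as I know the config file doesn't need to live in any special folder: you just pass its path to mlagents-learn on the command line, and self-play is activated by the self_play section inside that file rather than by a separate flag. A sketch, assuming the file is named trainer.yaml and sits in the current directory (the run id is only an example):

    Code (Shell):
    # run from the folder containing trainer.yaml (or give a full path to it)
    mlagents-learn trainer.yaml --run-id=test17 --force
    # then press Play in the Unity Editor when prompted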
     
    JeromePoenisch likes this.
  8. unity_6gCk04bKBvTBXw

    unity_6gCk04bKBvTBXw

    Joined:
    Aug 21, 2020
    Posts:
    5
    Sorry, I don't understand; the link you sent doesn't open.
     
  9. mamaorha

    mamaorha

    Joined:
    Jun 16, 2015
    Posts:
    44
    That's odd, I'll try posting it here.
     

    Attached Files: