Issue with env.set_actions

Discussion in 'ML-Agents' started by TheGreatBritishNinja, Apr 2, 2020.

  1. TheGreatBritishNinja

    Joined:
    May 20, 2019
    Posts:
    3
    I'm trying to train my agents with a custom Q-learning algorithm, following the getting-started notebook. However, I'm running into an issue with env.set_actions(agent_group, action): at a seemingly random point in training, the expected size of the action array jumps from 1 to 2, and I get this error: "The group LearnerBrain?team=0 needs an input of dimension (2, 6) but received input of dimension (1, 6)." I'm using step_result.n_agents() to size the action array, so it should always match, yet this error comes up every time I train. Any advice?
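
    For reference, a minimal sketch of the kind of loop I mean, using the low-level mlagents_envs API from the getting-started notebook (the group name, step count, and random actions below are placeholders standing in for my Q-learning policy):

    import numpy as np
    from mlagents_envs.environment import UnityEnvironment

    env = UnityEnvironment(file_name=None)  # None attaches to the running Editor
    env.reset()

    group_name = env.get_agent_groups()[0]             # e.g. "LearnerBrain?team=0"
    group_spec = env.get_agent_group_spec(group_name)

    for _ in range(1000):
        step_result = env.get_step_result(group_name)
        n = step_result.n_agents()                     # sized from the step result
        if group_spec.is_action_discrete():
            # One random action per agent for each discrete branch.
            branches = group_spec.discrete_action_branches
            action = np.column_stack(
                [np.random.randint(0, b, size=n) for b in branches]
            )
        else:
            action = np.random.randn(n, group_spec.action_size).astype(np.float32)
        env.set_actions(group_name, action)            # expects shape (n, action_size)
        env.step()
    env.close()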
     
  2. TreyK-47

    Unity Technologies

    Joined:
    Oct 22, 2019
    Posts:
    1,820
    I'll pass this to the team to have a look. Which version of C# and Python are you using?
     
  3. paypaytr

    Joined:
    Jan 30, 2018
    Posts:
    5
    "The group Basic?team=0 needs an input of dimension (1, 1) but received input of dimension ()"

    I'm hitting a similar issue with the Basic environment.
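
    A hedged note on this variant: "received input of dimension ()" usually means a bare Python scalar reached set_actions, which expects a 2-D array of shape (n_agents, action_size). An illustrative fix, reusing env and group_name as set up in the sketch in post #1:

    import numpy as np

    chosen = 2                           # e.g. np.argmax over the Q-values
    action = np.array([[chosen]])        # shape (1, 1): one agent, one branch
    env.set_actions(group_name, action)  # matches the expected (1, 1)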
     
  4. andrewcoh_unity

    Unity Technologies

    Joined:
    Sep 5, 2019
    Posts:
    162
    This is caused by how we handle agents reaching a terminal state. When a trajectory reaches a 'done' configuration, the initial state of the next trajectory is sent along with the terminal state of the old one, so the step result momentarily contains two entries instead of one. When you call n_agents(), do you only ever see 1 agent?
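
    A hedged sketch of one way to handle this on the Python side, continuing the loop from post #1 (record_terminal is a hypothetical helper, not part of the ML-Agents API): size the action array from the fresh step result each iteration, since at episode end the same BatchedStepResult can hold both the terminal entry (done == True) and the first entry of the next episode.

    import numpy as np

    step_result = env.get_step_result(group_name)
    n = step_result.n_agents()                     # may briefly be 2 at episode end

    for i in range(n):
        if step_result.done[i]:
            # Terminal entry: feed it to the Q-learning update only.
            record_terminal(step_result.agent_id[i], step_result.reward[i])

    # Every entry still needs an action row, including the fresh post-reset state.
    action = np.zeros((n, group_spec.action_size), dtype=np.int32)
    env.set_actions(group_name, action)
    env.step()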