
Question: Details of custom training when using ml-agents-envs

Discussion in 'ML-Agents' started by Dream_Surpass, Dec 6, 2022.

Dream_Surpass (Joined: Dec 2, 2022; Posts: 18)
I have created my own Unity environment and am trying some RL algorithms that I wrote myself, training just like the tutorial https://github.com/Unity-Technologi...op/colab/Colab_UnityEnvironment_2_Train.ipynb.

So I am wondering whether I can use multiprocessing in Python to accelerate training, just like "--num-envs" does when using mlagents-learn?

The env.reset() function in ml-agents-envs also confuses me a little. If there are many agents in one environment to speed up data collection (like the "GridWorld" env), does calling env.reset() reset all of those agents? In the Unity script, each agent resets itself when it reaches a terminal state, so is it necessary to call env.reset() in the Python script?
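To make the multiprocessing question concrete, here is a rough sketch of the pattern I have in mind. The worker body below is a placeholder, not real ml-agents code; in a real worker each process would open its own UnityEnvironment with a distinct worker_id (so the communication ports don't clash) and run its own step loop, and the parent would aggregate the collected experience.

```python
# Sketch of one-process-per-environment data collection.
# `worker` uses a dummy loop in place of a real Unity step loop;
# a real version would create a UnityEnvironment inside the process:
#   from mlagents_envs.environment import UnityEnvironment
#   env = UnityEnvironment(file_name="MyEnv", worker_id=worker_id)
# Each process needs a unique worker_id so their ports don't collide.
import multiprocessing as mp


def worker(worker_id: int, steps: int, out_queue: mp.Queue) -> None:
    total_reward = 0.0
    for _ in range(steps):
        total_reward += 1.0  # placeholder for env.step() + reward collection
    out_queue.put((worker_id, total_reward))


def collect_parallel(num_envs: int = 4, steps: int = 10) -> dict:
    """Launch one collector process per environment and gather results."""
    queue: mp.Queue = mp.Queue()
    procs = [
        mp.Process(target=worker, args=(i, steps, queue))
        for i in range(num_envs)
    ]
    for p in procs:
        p.start()
    results = dict(queue.get() for _ in procs)  # one result per worker
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    print(collect_parallel())
```

As I understand it, this is roughly what mlagents-learn's "--num-envs" option does internally, but I would like to confirm whether doing it by hand like this is supported with ml-agents-envs.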