Question: Details of custom training when using ml-agents-envs

Discussion in 'ML-Agents' started by Dream_Surpass, Dec 6, 2022.

  1. Dream_Surpass

     Joined: Dec 2, 2022
     Posts: 9
    I have created my own Unity Environment and tried some RL algorithms that I wrote myself for training, just like the tutorial https://github.com/Unity-Technologi...op/colab/Colab_UnityEnvironment_2_Train.ipynb.

    So I am wondering whether I can use multiprocessing in Python to speed things up, just like "--num-envs" does when using mlagents-learn?
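    To make that concrete, here is a minimal sketch of what I have in mind: each Python process opens its own UnityEnvironment with a distinct worker_id so the ports don't collide. ENV_PATH, NUM_WORKERS, and rollout_worker are placeholder names I made up, and random actions stand in for my own policy.

```python
# Sketch only: run several Unity environments in parallel processes,
# roughly what I imagine "--num-envs" does. ENV_PATH, NUM_WORKERS and
# rollout_worker are placeholder names.
import multiprocessing as mp

import numpy as np
from mlagents_envs.environment import UnityEnvironment

ENV_PATH = "path/to/your/build"  # placeholder path to the built environment
NUM_WORKERS = 4                  # placeholder number of parallel envs


def rollout_worker(worker_id, result_queue):
    # Each process gets its own UnityEnvironment; worker_id offsets the
    # communication port so the instances do not collide.
    env = UnityEnvironment(file_name=ENV_PATH, worker_id=worker_id, no_graphics=True)
    env.reset()
    behavior_name = list(env.behavior_specs)[0]
    spec = env.behavior_specs[behavior_name]

    total_reward = 0.0
    for _ in range(100):  # collect a short rollout with random actions
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        total_reward += float(np.sum(decision_steps.reward)) + float(np.sum(terminal_steps.reward))
        if len(decision_steps) > 0:
            action = spec.action_spec.random_action(len(decision_steps))
            env.set_actions(behavior_name, action)
        env.step()

    env.close()
    result_queue.put((worker_id, total_reward))


if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=rollout_worker, args=(i, queue)) for i in range(NUM_WORKERS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    while not queue.empty():
        print(queue.get())  # (worker_id, total_reward) collected by each process
```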

    And the env.reset() function in ml-agents-envs confuses me a little. If there are many Agents in one env to speed up data collection (like the "GridWorld" env), does that mean I reset all of these agents when I call env.reset()? But in the Unity script, each agent resets itself when it reaches a terminal state. Is it necessary to call env.reset() in the Python script?
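    For reference, my single-env collection loop currently looks roughly like the sketch below (NUM_STEPS is a placeholder and random actions stand in for my policy): I call env.reset() only once before the loop and read per-agent episode ends from terminal_steps, without calling env.reset() again.

```python
# Sketch of my current single-env loop: env.reset() is called once up front;
# per-agent episode ends arrive in terminal_steps while the scene keeps running.
# NUM_STEPS is a placeholder; random actions stand in for my policy.
from mlagents_envs.environment import UnityEnvironment

NUM_STEPS = 1000

env = UnityEnvironment(file_name="path/to/your/build", no_graphics=True)
env.reset()  # one global reset of the whole environment
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(NUM_STEPS):
    decision_steps, terminal_steps = env.get_steps(behavior_name)

    # Agents that reached a terminal state this step appear here; in Unity they
    # reset themselves (EndEpisode), so I record the transition and move on
    # instead of calling env.reset() again.
    for agent_id in terminal_steps.agent_id:
        final_reward = terminal_steps[agent_id].reward  # store as a "done" transition

    # Agents requesting a decision get an action (random here).
    if len(decision_steps) > 0:
        action = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, action)

    env.step()

env.close()
```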