max_step is not working for --inference mode

Discussion in 'ML-Agents' started by Farshad22, Feb 26, 2022.

  1. Farshad22

    Joined:
    Dec 21, 2021
    Posts:
    1
    Hi,
    I have trained a model using the following command:

    mlagents-learn config/ppo/unit.yaml --run-id=myid --env=cs_window/Build --force


    In the config file (unit.yaml), max_step is set to 10000, so the training process stops at step 10000 and saves the .onnx file.
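
    For reference, a minimal sketch of what such a PPO trainer config might look like (the behavior name and all the other values here are placeholders rather than the actual contents of unit.yaml; note that in ML-Agents trainer configs the key is spelled max_steps):

    behaviors:
      MyBehavior:                # placeholder behavior name
        trainer_type: ppo
        hyperparameters:
          batch_size: 1024
          buffer_size: 10240
          learning_rate: 3.0e-4
        reward_signals:
          extrinsic:
            gamma: 0.99
            strength: 1.0
        max_steps: 10000         # training stops here and the .onnx is exported
        time_horizon: 64
        summary_freq: 1000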

    So far so good!

    But now I want to run inference with the trained model. I used the following command:

    mlagents-learn config/ppo/unit.yaml --run-id=myid --env=cs_window/Build --resume --inference


    Again, max_step is set to 10000 in the unit.yaml config file, but the inference process does not stop at step 10000.

    What is the problem? Why is max_step not working in inference mode?

    Thanks
     
    Last edited: Feb 26, 2022
  2. weight_theta

    Joined:
    Aug 23, 2020
    Posts:
    65
    When does the inference process stop?
    Note: A clever workaround for this would be to simply continue training with a learning rate of 0, since you don't usually need to continue training when doing inference in RL.
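
    For example, something along these lines under your behavior's hyperparameters section would pin the rate at zero (just a sketch against a PPO config; the learning_rate_schedule line is optional and only there to avoid the default linear decay):

    hyperparameters:
      learning_rate: 0.0                # no effective policy updates, behaves like inference
      learning_rate_schedule: constant  # optional: keep the rate at 0 instead of decaying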