
Question: What could be the reason my ML-Agent stopped improving after 900,000 steps?

Discussion in 'ML-Agents' started by TheSwe, Feb 18, 2024.

  1. TheSwe

    Joined: Nov 25, 2022
    Posts: 1
    I have created a car game and I'm trying to use ML-Agents to make the car drive by itself, and the training is working: the agent steadily gets farther and farther along the track until around 900,000 steps, where it suddenly seems to forget everything. The agent just drops back to roughly the reward it was getting at the very beginning. I have no idea what could be causing this strange behavior, but it's not the track, as I have tried different tracks with the same effect. My guess is that it's some kind of time-out in ML-Agents, but I haven't found anything. What could it be, and what can I do to solve it?

    Attached: Hyperparameters1.png, Reluslts1.png
     
  2. firdiar

    Joined: Aug 2, 2017
    Posts: 25
    Some advice from me:
    - Don't set beta lower than 0.005 for first-time training (it reduces agent exploration; I usually keep beta constant at 0.005).
    - Don't give rewards and punishments of equal size; I usually use +1 for the reward and -0.01 for the punishment. If you need a graph of improvement, use StatsRecorder and make your own custom graph (see the C# sketch after this list).
    - Make sure you have enough hidden units and layers (though very high values hurt performance and can lead to overfitting).
    - Hidden units: affect the network's capacity, letting the agent solve more complex problems.
    - num_layers: affects training speed and accuracy.
    - Use GAIL instead of behavioural cloning.
    - If your model is unstable (the reward goes up and down), reduce the learning rate (but only after a long training run).
    - Set time_horizon to roughly how many steps you expect the agent to need to complete an episode. (A trainer config sketch covering these settings follows this list.)
    - Use EpisodeInterrupted() if you want to end the episode before the agent has reached the goal.
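
    For reference, here is a minimal sketch of where those settings live in the trainer config YAML. The behavior name CarAgent, the demo path, and the exact values are placeholders for illustration, not your actual settings:

    Code (YAML):
        behaviors:
          CarAgent:                        # placeholder behavior name
            trainer_type: ppo
            hyperparameters:
              batch_size: 1024
              buffer_size: 10240
              learning_rate: 3.0e-4        # reduce this if the reward oscillates late in training
              learning_rate_schedule: linear
              beta: 5.0e-3                 # keep at least 0.005 for first-time training (exploration)
              beta_schedule: constant      # keep beta constant instead of decaying it
              epsilon: 0.2
              lambd: 0.95
              num_epoch: 3
            network_settings:
              normalize: true
              hidden_units: 256            # network capacity for more complex problems
              num_layers: 2                # affects training speed and accuracy
            reward_signals:
              extrinsic:
                gamma: 0.99
                strength: 1.0
              gail:                        # imitation signal instead of behavioural cloning
                strength: 0.5
                demo_path: Demos/CarDemo.demo   # placeholder path to a recorded demonstration
            time_horizon: 128              # roughly the steps you expect an episode to take
            max_steps: 2.0e6
            summary_freq: 10000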
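
    And a minimal C# sketch of the two calls mentioned above: Academy.Instance.StatsRecorder for a custom graph and EpisodeInterrupted() for ending an episode that timed out. The CarAgent class, the distance metric, and the time-out check are placeholders for illustration:

    Code (CSharp):
        using Unity.MLAgents;
        using Unity.MLAgents.Actuators;
        using UnityEngine;

        // Placeholder agent class; only the parts relevant to the advice above are shown.
        public class CarAgent : Agent
        {
            [SerializeField] private int maxEpisodeSteps = 2000; // placeholder time-out
            private int steps;
            private float distanceAlongTrack;                    // placeholder progress metric

            public override void OnEpisodeBegin()
            {
                steps = 0;
                distanceAlongTrack = 0f;
            }

            public override void OnActionReceived(ActionBuffers actions)
            {
                steps++;

                // Record a custom metric so progress shows up as its own graph in TensorBoard,
                // instead of being read off the cumulative-reward curve.
                Academy.Instance.StatsRecorder.Add("Car/DistanceAlongTrack", distanceAlongTrack);

                // Small punishment, big reward, e.g. AddReward(-0.01f) on hitting a wall
                // and AddReward(1f) on finishing a lap (reward logic omitted here).

                // If the episode merely timed out rather than failed or succeeded, interrupt it
                // instead of ending it, so the value estimate is bootstrapped from the last state.
                if (steps >= maxEpisodeSteps)
                {
                    EpisodeInterrupted();
                }
            }
        }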