Inference Differences

Discussion in 'ML-Agents' started by unity_D79EF882EBFC1497854E, Jul 12, 2022.

  1. unity_D79EF882EBFC1497854E


    Apr 6, 2022
    I have a very simple PPO that I've copied from the 3DBall example. Didn't change many parameters.
    When I train, the agent learns great. Drives the car. Learns to go faster. Perfect for what I'm trying to learn.

    However, when I connect the model to the Agent in the editor and hit play, the inference is terrible. The behavior is totally different from training: the agent runs off the track, starts super slow, and doesn't speed up the way it did during training.

    One clue is that when I artificially lower the frame rate (by running other applications), it seems to do better. Based on this, I tried changing the "Decision Period" on the "Decision Requester", but that doesn't seem to fix the issue. (Also note: 3DBall works fine in both training and inference.)

    Does anyone have an idea why the inference behavior is so vastly different from training behavior?

    I installed the latest ML-Agents from the repo, and the latest PyTorch.
    Platform: MacOS m1
    Unity 2021.2.16f1 [AppleSilicon]
    python 3.9.12
    pytorch 1.12.0
  2. RMB1100


    Jan 8, 2021
    I have the same problem. When using PPO with a network of size 2x256, the behaviour in inference was equivalent to the best performance in training. When I changed to a 3x512 network (as part of a wider attempt to optimise learnt behaviour), I saw the problem as above: good behaviour in training, poor in inference. It may be relevant that I'm running both training and inference with time-scale=3. Reducing the timescale to 0.1 during inference does improve the behaviour a little, but does not get close to recovering the full in-training performance (and is not a practical solution in any case). I will try with an intermediate-sized network.

    Any thoughts/advice welcome.

    Version information (also running on AppleSilicon):
    ml-agents: 0.27.0,
    ml-agents-envs: 0.27.0,
    Communicator API: 1.5.0,
    PyTorch: 1.8.1
  3. WaxyMcRivers


    May 9, 2016
    Make sure all control/observation logic is inside FixedUpdate and not Update. There are lots of posts on this forum about people running into inference-difference issues, and it's almost always because there's logic in Update rather than FixedUpdate.
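
    A minimal sketch of that pattern (the class name TrackAgent and the force values are hypothetical; it assumes a standard ML-Agents Agent with a DecisionRequester component attached, so the Agent callbacks are driven on the physics step rather than the render loop):

    ```csharp
    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    // Hypothetical car agent: all observation and action logic lives in the
    // Agent callbacks, which run on the FixedUpdate cadence when a
    // DecisionRequester component is attached. Nothing physics-related
    // goes in Update().
    public class TrackAgent : Agent
    {
        Rigidbody rb;

        public override void Initialize()
        {
            rb = GetComponent<Rigidbody>();
        }

        public override void CollectObservations(VectorSensor sensor)
        {
            // Observations of physics state, collected on the physics step.
            sensor.AddObservation(rb.velocity);
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            // Apply forces here: how often actions are applied per physics
            // step no longer depends on the rendered frame rate.
            float throttle = actions.ContinuousActions[0];
            rb.AddForce(transform.forward * throttle, ForceMode.Acceleration);
        }

        // Anti-pattern to avoid:
        // void Update() { rb.AddForce(...); }
        // Update runs once per rendered frame, so the effective control rate
        // changes with frame rate and time scale, which is exactly the
        // training-vs-inference mismatch described above.
    }
    ```

    This is a sketch, not a drop-in fix; the point is only that observation and force application happen in the physics-driven callbacks, so training (run at a high time scale) and in-editor inference (run at the display frame rate) see the same control cadence.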