Question Inference differs from training

Discussion in 'ML-Agents' started by Sajmon1337, Mar 9, 2023.

  1. Sajmon1337

    Joined:
    Dec 9, 2021
    Posts:
    3
    Cheers!

    What could be the source of error when the agent's behavior during inference differs significantly from what I observe while training? If I take the generated .onnx brain, assign it to the Behavior Parameters component in the editor, and then run the simulation in the editor or bake the brain into a build (Mono or IL2CPP), the behavior is completely off. BUT when I run inference from mlagents-learn via the --inference flag, the behavior is as expected, both in a build and in editor play mode.
    I have no clue where to even start digging into this.
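
    For clarity, the in-editor path is just the standard Behavior Parameters setup, roughly equivalent to this (a minimal sketch rather than my actual project code; it assumes the usual BehaviorParameters.SetModel API from the com.unity.ml-agents package, and "MyBehavior" plus the model reference are placeholders):

    Code (CSharp):

        using Unity.Barracuda;              // NNModel asset type created when the .onnx is imported
        using Unity.MLAgents.Policies;      // BehaviorParameters component
        using UnityEngine;

        public class AssignBrain : MonoBehaviour
        {
            // The generated .onnx imported as an NNModel asset (placeholder reference).
            [SerializeField] NNModel trainedBrain;

            void Start()
            {
                // Same effect as dragging the asset into the Model field of
                // Behavior Parameters in the Inspector; "MyBehavior" is a
                // placeholder behavior name.
                var behavior = GetComponent<BehaviorParameters>();
                behavior.SetModel("MyBehavior", trainedBrain);
            }
        }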

    using:

    ml-agents: 0.30.0,
    ml-agents-envs: 0.30.0,
    Communicator API: 1.5.0,
    PyTorch: 1.13.1+cu117

    on Ubuntu 22.04 LTS

    with lots of Burst-compiled jobs in the sensor logic.