Question External Inference (in Python)

Discussion in 'ML-Agents' started by GalacticGlum, Aug 29, 2020.

  1. GalacticGlum

    Joined:
    Jul 30, 2015
    Posts:
    31
    I'm trying to transfer a model trained with ML-Agents to the real-world, which requires inference in a Python process. Is this possible?
     
  2. ReinierJ

    Joined:
    Jul 10, 2020
    Posts:
    10
  3. GalacticGlum

    Joined:
    Jul 30, 2015
    Posts:
    31
    From what I could gather, the ml-agents trainer outputs a .nn file which uses the ONNX format. So does that mean I can run inference with any tool that can load an ONNX model? In that case, what is the difference between that and "TensorFlow models"? Do you mean the TensorFlow implementation used in the ml-agents source code?
     
  4. ReinierJ

    Joined:
    Jul 10, 2020
    Posts:
    10
    Internally, ml-agents uses TensorFlow and produces a .nn file. This .nn file is not in the ONNX format but in a custom (Barracuda) format, so you cannot load it in applications that support ONNX. To convert your model to ONNX, you can install the tf2onnx Python package; the trainer should then automatically export a .onnx file to the results directory alongside the .nn file. Alternatively, you could try the export tools documented here to convert your .nn file to ONNX:
    https://docs.unity3d.com/Packages/com.unity.barracuda@1.0/manual/Exporting.html
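    Once you have a .onnx file, the Python side of the workflow might look like the sketch below, which uses the onnxruntime package. So that the example is self-contained, it builds a tiny one-node ONNX graph (y = x + 1) in memory rather than loading a real exported policy; the graph name, input name, and shapes are all assumptions for illustration. With a real model you would instead pass the .onnx file path to InferenceSession and inspect the model's actual input names and shapes.

    ```python
    # Minimal, self-contained sketch of ONNX inference from Python via
    # onnxruntime. A real ml-agents export would replace the in-memory
    # graph below with ort.InferenceSession("results/<run-id>/....onnx").
    import numpy as np
    from onnx import TensorProto, helper
    import onnxruntime as ort

    # Build a one-node graph (y = x + one) as a stand-in for a policy.
    # The name "demo", the input name "x", and the [1, 3] shape are all
    # illustrative assumptions, not ml-agents conventions.
    node = helper.make_node("Add", inputs=["x", "one"], outputs=["y"])
    one = helper.make_tensor("one", TensorProto.FLOAT, [1], [1.0])
    graph = helper.make_graph(
        [node],
        "demo",
        inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])],
        outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])],
        initializer=[one],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
    model.ir_version = 8  # stay within the IR range older runtimes accept

    session = ort.InferenceSession(model.SerializeToString())

    # Input names and shapes vary per model, so query them at runtime.
    input_name = session.get_inputs()[0].name

    obs = np.zeros((1, 3), dtype=np.float32)  # dummy observation batch
    (output,) = session.run(None, {input_name: obs})
    print(output)
    ```

    For a robot or other real-world deployment, you would feed real sensor observations in place of the dummy batch and read the action values out of the network's outputs each control step.
    
    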