Using .onnx file or .pt file in real life

Discussion in 'ML-Agents' started by MrOCW, Apr 27, 2021.

  1. MrOCW

    MrOCW

    Joined:
    Feb 16, 2021
    Posts:
    51
     Hi, I have successfully trained a self-driving car based on visual observations. I have located the .pt file and the .onnx file and wish to use the trained model to drive an RC car in real life. Can I just load the .pt file in PyTorch, feed an image into the model, and use the output as the controls for the RC car?
     May I know if there are any official methods for using the .pt/.onnx file in real life? I've read through some posts which said it's not supported?

    Thanks in advance.
     
  2. celion_unity

    celion_unity

    Unity Technologies

    Joined:
    Jun 12, 2019
    Posts:
    289
    Hi,
     This should be possible, but it's not an area where we can give much guidance or help with debugging. Assuming you can run an ONNX model on your car, you'll need to feed the correct observations as input tensors, evaluate the model, and extract the outputs; those outputs should be the same as the ones that get passed to Agent.OnActionReceived.
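     As a rough sketch, that loop could look like the following on the car's onboard computer, assuming onnxruntime runs there. The tensor name "obs_0" and the NHWC layout with values scaled to [0, 1] are assumptions, not confirmed ML-Agents details; inspect your own .onnx file (e.g. with Netron) to find the real names and shapes:

```python
# Sketch of feeding a camera frame to an exported policy with onnxruntime.
# The tensor name "obs_0" and the (1, H, W, 3) float layout in [0, 1] are
# assumptions; check your own .onnx file to confirm them.
import numpy as np

def preprocess_frame(frame_hwc_uint8):
    """Convert a raw H x W x 3 uint8 camera frame into a float32
    batch tensor of shape (1, H, W, 3) with values in [0, 1]."""
    obs = frame_hwc_uint8.astype(np.float32) / 255.0
    return obs[np.newaxis, ...]  # add the batch dimension

# With onnxruntime installed on the car:
#
#   import onnxruntime as ort
#   session = ort.InferenceSession("SelfDrivingCar.onnx")
#   outputs = session.run(None, {"obs_0": preprocess_frame(frame)})
#
# session.get_inputs() / session.get_outputs() list the actual tensor
# names and shapes your exported model expects.
```

     A model trained with additional vector observations should have one input tensor per sensor rather than a single concatenated tensor, but again, verify that against your own exported model.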
     
  3. MrOCW

    MrOCW

    Joined:
    Feb 16, 2021
    Posts:
    51
     @celion_unity if my RC car has a W x H x 3 Raspberry Pi camera and maybe a few unidirectional radar sensors, should I train in ML-Agents with a Camera Sensor + RayPerception3D + AddVectorObs? And once I finish training and deploy the ONNX model on the onboard computer, would the input tensor be W x H x 3 x AddVectorObs? Do you have an example of what the input tensor would look like for a camera sensor + other vector obs? Also, can the .pt file be used for the same purpose instead of ONNX?
     
  4. celion_unity

    celion_unity

    Unity Technologies

    Joined:
    Jun 12, 2019
    Posts:
    289
     These sorts of questions are why we don't offer any support on this :)

    It sounds like it would be straightforward to match the output of the on-board camera with the format of the observations for the camera sensor.

     Matching the ray perception to the radar sensor would be a lot trickier; the format of the ray observations is very specific and doesn't really have a "real world" equivalent. The raycast knows exactly what type of object it hit, and it knows where in space the hit is because of the hit fraction. If you want to use a radar sensor in real life, I think you'd have better luck designing an ML-Agents sensor that behaves like real radar (note that this is also a very hard problem).

     As for the model format: we use ONNX because it's an open format and it's what Barracuda understands best. I don't know much about the .pt format, but it sounds like it's mostly an internal format for PyTorch. If I were you, I'd find out which format has the best support on your hardware, and try to find something that converts ONNX files to that format.
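     To make the "sensor that behaves like real radar" idea concrete, here is a hypothetical observation layout you could target on both sides: a fixed number of beams across a field of view, each reporting the normalized range to the nearest detection (1.0 when clear). In Unity you would produce this layout from a custom sensor; on the car you would fill it from the real radar returns. Everything below is illustrative, not ML-Agents API:

```python
# Hypothetical radar-style observation shared between simulation and the
# real car: num_beams beams over a field of view, each holding the
# normalized range to the nearest detection (1.0 = nothing in range).
# All names and defaults here are illustrative placeholders.
import numpy as np

def radar_observation(detections, num_beams=5, fov_deg=90.0, max_range=5.0):
    """detections: list of (angle_deg, range_m) returns from the radar,
    with angle 0 straight ahead. Returns a float32 vector in [0, 1]."""
    obs = np.ones(num_beams, dtype=np.float32)  # 1.0 = beam is clear
    beam_edges = np.linspace(-fov_deg / 2, fov_deg / 2, num_beams + 1)
    for angle, rng in detections:
        if rng > max_range:
            continue  # out of range, treat as clear
        beam = np.searchsorted(beam_edges, angle, side="right") - 1
        if 0 <= beam < num_beams:
            obs[beam] = min(obs[beam], rng / max_range)
    return obs
```

     The important part is not this particular encoding but that the simulated sensor and the physical one agree on it exactly, so the trained policy sees the same distribution of inputs in both worlds.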
     
  5. MrOCW

    MrOCW

    Joined:
    Feb 16, 2021
    Posts:
    51
     @celion_unity I'll try it with purely visual observations, without vector obs, then!
     In that case, my car only has 2 continuous actions, namely torque and steering. So given a camera image as a W x H x 3 tensor, would the ONNX model's output be a list of 2 values? I've seen another post about the outputs being log probabilities?
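     Assuming the model's continuous outputs come back in [-1, 1] (the default clipped range for ML-Agents continuous actions; check your model's output tensor names and values to be sure, since some exports also expose distribution parameters), mapping them to hardware commands is up to you. A hypothetical mapping to servo pulse widths, with purely illustrative PWM ranges:

```python
# Hypothetical mapping from the policy's 2 continuous actions (assumed to
# be in [-1, 1]) to RC-car hardware commands. The 1100..1900 us pulse
# range with 1500 us as neutral is an illustrative placeholder for
# whatever your ESC and steering servo actually expect.
import numpy as np

def actions_to_commands(actions):
    """actions: array-like [torque, steering], each nominally in [-1, 1].
    Returns (throttle_us, steering_us) servo pulse widths in microseconds."""
    torque, steering = np.clip(np.asarray(actions, dtype=np.float32), -1.0, 1.0)
    throttle_us = 1500.0 + 400.0 * torque     # 1500 us = stopped
    steering_us = 1500.0 + 400.0 * steering   # 1500 us = centered
    return float(throttle_us), float(steering_us)
```

     Clipping before the mapping is a cheap safety net in case the model ever emits values slightly outside the expected range.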
     
  6. sa2706

    sa2706

    Joined:
    Sep 10, 2021
    Posts:
    2
     Hello, are there any updates on this? I am trying to do the same thing but did not find any documentation about it.
     
  7. jrupert-unity

    jrupert-unity

    Unity Technologies

    Joined:
    Oct 20, 2021
    Posts:
    12
    There's no off-the-shelf solution for this. The discussion above covers the issues. The main things are figuring out how to best match your simulation sensors to the physical ones, and then how to export the model for inference on your platform. The best solutions for these will depend on the hardware and probably require some iteration to get working.

    Here's an example that was posted recently:
    https://forum.unity.com/threads/post-your-ml-agents-project.1005134/#post-7634932
     
    Last edited: Dec 18, 2021