Question: How to use an ONNX model trained with RLlib in the Unity Editor.

Discussion in 'ML-Agents' started by piotr_kukla, Oct 26, 2021.

  1. piotr_kukla

    Joined:
    Aug 10, 2020
    Posts:
    1
    Hi.
    Following https://medium.com/distributed-comp...h-rllib-in-the-unity-game-engine-1a98080a7c0d
    I trained a 3DBall model and exported it to ONNX format (with opset=9).
    Now I'm trying to use this model for inference in the Unity Editor.
    After importing, I get a warning that "version_number" was not found.
    When I run the 3DBall scene, I get many errors saying there is no tensor "version_number", no tensor "continuous_actions", etc.

    Simply put, when I train a model with RLlib, it has none of the constant tensors such as "version_number" and "memory_size" that inference in the Unity Editor requires.
    Is there any workaround for that?
    Should I somehow add these constants to the model manually?
    What values should I set? Are there any docs?

    Thanks for the help.
     
  2. gft-ai

    Joined:
    Jan 12, 2021
    Posts:
    44
    I don’t have any experience with RLlib, but I have had a similar experience when converting PyTorch models to ONNX for use in Unity. What I did was manually add those missing fields, and it worked.
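    For anyone wondering what "manually adding those fields" can look like: below is a minimal, self-contained sketch (not my exact script) that builds a tiny dummy graph with the ONNX Python API and then appends the constants the ML-Agents importer complains about, both as initializers and as named graph outputs. The tensor names and the version value 3 match what the converter later in this thread uses; treat them as assumptions and verify against your ML-Agents version.

    ```python
    import onnx
    from onnx import TensorProto, helper

    # A tiny stand-in graph: one Identity node, obs_0 -> continuous_actions.
    # In practice you would onnx.load() your exported RLlib model instead.
    inp = helper.make_tensor_value_info("obs_0", TensorProto.FLOAT, [1, 2])
    out = helper.make_tensor_value_info("continuous_actions", TensorProto.FLOAT, [1, 2])
    ident = helper.make_node("Identity", inputs=["obs_0"], outputs=["continuous_actions"])
    graph = helper.make_graph([ident], "dummy_policy", [inp], [out])
    model = helper.make_model(graph)

    # Append the missing constants, both as initializers (their values)
    # and as named graph outputs so Unity's importer can find them.
    for name, value in [("version_number", 3), ("memory_size", 0)]:
        const = helper.make_tensor(name, TensorProto.INT64, [1], [value])
        graph.initializer.append(const)
        graph.output.append(helper.make_tensor_value_info(name, TensorProto.INT64, [1]))

    onnx.checker.check_model(model)
    # onnx.save(model, "patched.onnx")
    ```

    The same append-to-initializer/append-to-output pattern applies when you load a real exported model instead of the dummy graph.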
     
  3. mla_jasonb

    Unity Technologies

    Joined:
    Jan 18, 2020
    Posts:
    3
    We are currently working on expanded documentation and conversion tooling for using externally trained models with our Barracuda runtime inference. Be on the lookout for updates to the main repo over the next couple of weeks.
     
  4. michaelliutt

    Joined:
    May 4, 2021
    Posts:
    3
    Is there any update on this?
     
  5. vich94

    Joined:
    Jan 4, 2022
    Posts:
    5
    I have the same issue, any update?
     
  6. aciffar

    Joined:
    Jul 28, 2018
    Posts:
    1
    I also have the same issue, any update?
     
  7. Pentascript

    Joined:
    May 19, 2017
    Posts:
    1
    Any updates please...?
     
  8. t01a

    Joined:
    Apr 26, 2022
    Posts:
    2
    Any update...?
     
  9. strnam

    Joined:
    Aug 4, 2021
    Posts:
    1
    @gft-ai Hi, would you share how you manually added the missing fields to the ONNX model?
     
  10. t01a

    Joined:
    Apr 26, 2022
    Posts:
    2
    The key issue is that the RLlib model and ML-Agents expect different inputs/outputs. For example, with the example script provided by RLlib, https://github.com/ray-project/ray/blob/master/rllib/examples/unity3d_env_local.py, training 3DBall in torch yields a model that outputs a single tensor whose first half is the action and whose second half is the std (for random exploration), whereas ML-Agents needs an ONNX model that outputs the action, the deterministic action, and a few other outputs. You should read the RLlib code for more input/output info. I also recommend using https://netron.app/ to view your ONNX graph.
    It's possible to edit an ONNX model directly with Python; see https://github.com/onnx/onnx/blob/main/docs/PythonAPIOverview.md and https://github.com/onnx/onnx/blob/main/docs/Operators.md. Here is my conversion example for a 3DBall torch model trained with the RLlib example:
    Code (Python):
    import onnx

    torchmodel = onnx.load('torchmodel.onnx')  # path to the RLlib output model

    graph = torchmodel.graph
    graph.input.pop()  # remove an unused input
    graph.input[0].name = 'obs_0'  # rename the input
    graph.node[0].input[0] = 'obs_0'

    # slice the first half of the output array as the true action
    starts = onnx.helper.make_tensor("starts", onnx.TensorProto.INT64, [1], [0])
    ends = onnx.helper.make_tensor("ends", onnx.TensorProto.INT64, [1], [2])
    axes = onnx.helper.make_tensor("axes", onnx.TensorProto.INT64, [1], [-1])  # the last dimension
    graph.initializer.append(starts)
    graph.initializer.append(ends)
    graph.initializer.append(axes)

    # constant outputs that are unused at inference time but required by ML-Agents
    version_number = onnx.helper.make_tensor("version_number", onnx.TensorProto.INT64, [1], [3])
    memory_size = onnx.helper.make_tensor("memory_size", onnx.TensorProto.INT64, [1], [0])
    continuous_actions = onnx.helper.make_tensor("continuous_actions", onnx.TensorProto.FLOAT, [2], [0, 0])
    continuous_action_output_shape = onnx.helper.make_tensor("continuous_action_output_shape", onnx.TensorProto.INT64, [1], [2])
    graph.initializer.append(version_number)
    graph.initializer.append(memory_size)
    graph.initializer.append(continuous_actions)
    graph.initializer.append(continuous_action_output_shape)

    # add the Slice node as the last layer
    node = onnx.helper.make_node(
        'Slice',
        inputs=['output', 'starts', 'ends', 'axes'],
        outputs=['deterministic_continuous_actions'],
    )
    graph.node.append(node)

    # clear the old outputs and add the new ones
    while len(graph.output):
        graph.output.pop()
    actions_info = onnx.helper.make_tensor_value_info("deterministic_continuous_actions", onnx.TensorProto.FLOAT, shape=[])
    graph.output.append(actions_info)
    version_number_info = onnx.helper.make_tensor_value_info("version_number", onnx.TensorProto.INT64, shape=[])
    graph.output.append(version_number_info)
    memory_size_info = onnx.helper.make_tensor_value_info("memory_size", onnx.TensorProto.INT64, shape=[])
    graph.output.append(memory_size_info)
    continuous_actions_info = onnx.helper.make_tensor_value_info("continuous_actions", onnx.TensorProto.FLOAT, shape=[])
    graph.output.append(continuous_actions_info)
    continuous_action_output_shape_info = onnx.helper.make_tensor_value_info("continuous_action_output_shape", onnx.TensorProto.INT64, shape=[])
    graph.output.append(continuous_action_output_shape_info)

    onnx.checker.check_model(torchmodel)
    onnx.save(torchmodel, 'mlagentmodel.onnx')  # save path; you can also check the model output in Python with onnxruntime
    A more elegant way would be to get the torch/tf model, modify its inputs/outputs, and then save it as ONNX the way ml-agents does: https://github.com/Unity-Technologi...nts/trainers/model_saver/torch_model_saver.py; or bypass ml-agents entirely and use Barracuda to execute the RLlib ONNX model. However, I didn't find a proper way to extract the torch/tf model from RLlib, and I have very little experience with C#... I'd appreciate help from anyone on this topic.
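    To make the "modify the torch model before export" route concrete, here is a rough, untested sketch of what such a wrapper could look like, assuming you can get hold of the underlying torch policy module from RLlib (e.g. via the policy's `model` attribute). The output names and the version_number value 3 mirror the converter above; the stand-in Linear policy and all shapes are made up for illustration:

    ```python
    import torch
    import torch.nn as nn

    class MLAgentsWrapper(nn.Module):
        """Hypothetical wrapper: adapts a policy whose forward returns
        [action_mean | action_log_std] into the output set that
        ML-Agents' Barracuda inference looks for (names and constants
        taken from this thread, not from official docs)."""
        def __init__(self, policy: nn.Module, act_size: int):
            super().__init__()
            self.policy = policy
            self.act_size = act_size

        def forward(self, obs_0: torch.Tensor):
            out = self.policy(obs_0)
            mean = out[..., : self.act_size]  # first half = action mean
            version_number = torch.tensor([3], dtype=torch.int64)
            memory_size = torch.tensor([0], dtype=torch.int64)
            action_shape = torch.tensor([self.act_size], dtype=torch.int64)
            # order: continuous_actions, deterministic_continuous_actions,
            # version_number, memory_size, continuous_action_output_shape
            return mean, mean, version_number, memory_size, action_shape

    # Stand-in policy: 8 observations -> 2 means + 2 log-stds.
    wrapper = MLAgentsWrapper(nn.Linear(8, 4), act_size=2)
    actions, det_actions, ver, mem, shape = wrapper(torch.zeros(1, 8))
    # Then export with explicit tensor names, e.g.:
    # torch.onnx.export(wrapper, torch.zeros(1, 8), "mlagents_model.onnx",
    #                   input_names=["obs_0"],
    #                   output_names=["continuous_actions",
    #                                 "deterministic_continuous_actions",
    #                                 "version_number", "memory_size",
    #                                 "continuous_action_output_shape"],
    #                   opset_version=9)
    ```

    This way the graph already has the right inputs/outputs at export time, and no ONNX surgery is needed afterwards.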