
Question Robust Video Matting ONNX import error

Discussion in 'Barracuda' started by adcimon, Feb 18, 2022.

  1. adcimon

    adcimon

    Joined:
    Aug 8, 2012
    Posts:
    8
    Hello!

    I want to learn Barracuda and machine learning with Unity. I am trying to import the ONNX models provided by RobustVideoMatting (https://github.com/PeterL1n/RobustVideoMatting), but I get the following error:


    Exception: Must have input rank for 613 in order to convert axis for NHWC op
    Unity.Barracuda.Compiler.Passes.NCHWToNHWCPass.ConvertAxis (Unity.Barracuda.Layer layer, Unity.Barracuda.ModelBuilder net)
    Asset import failed, "Assets/RVM/ONNX/rvm_resnet50_fp32.onnx" > Exception: Must have input rank for 613 in order to convert axis for NHWC op


    And warnings:


    Unsupported attribute coordinate_transformation_mode, node 399 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 399 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
    Unsupported attribute ceil_mode, node 579 of type AveragePool. Value will be ignored and defaulted to 0.
    Unsupported attribute ceil_mode, node 580 of type AveragePool. Value will be ignored and defaulted to 0.
    Unsupported attribute ceil_mode, node 581 of type AveragePool. Value will be ignored and defaulted to 0.
    Unsupported attribute coordinate_transformation_mode, node 605 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 605 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
    Unsupported attribute coordinate_transformation_mode, node 641 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 641 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
    Unsupported attribute coordinate_transformation_mode, node 677 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 677 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
    Unsupported attribute coordinate_transformation_mode, node 713 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 713 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
    Unsupported attribute coordinate_transformation_mode, node 770 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 770 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
    Unsupported attribute coordinate_transformation_mode, node 784 of type Resize. Value will be ignored and defaulted to half_pixel.
    Unsupported attribute nearest_mode, node 784 of type Resize. Value will be ignored and defaulted to round_prefer_floor.


    Is the model supported by Barracuda (it is based on MobileNet and ResNet)? Do you have any idea what the issue is?

    Thanks!
     
  2. alexandreribard_unity

    alexandreribard_unity

    Unity Technologies

    Joined:
    Sep 18, 2019
    Posts:
    53
    Looking at the model, the reason we don't support importing it atm is the dynamic input shapes.
    If you fix the input shapes, the model should be easier to import.
    Let me know
     
  3. adcimon

    adcimon

    Joined:
    Aug 8, 2012
    Posts:
    8
    Thank you for your quick answer. I am not an expert, but I'll try it, thanks!
     
  4. DannyWoo

    DannyWoo

    Joined:
    Oct 25, 2012
    Posts:
    24
  5. wang9426

    wang9426

    Joined:
    Sep 21, 2017
    Posts:
    1
    Have you solved the problem? Thanks!
     
  6. adcimon

    adcimon

    Joined:
    Aug 8, 2012
    Posts:
    8
    No success so far. I've tried different video matting and human segmentation ONNX models in Barracuda (https://github.com/PINTO0309/PINTO_model_zoo#8-semantic-segmentation), and the problem with dynamic input shapes keeps recurring.

    There is also an issue with some models:
    624 Number of elements in InstanceNorm must match features from the previous layer. Was expecting 96, but got 48.
    I am learning PyTorch from scratch in my spare time, so it is a slow process :')
     
  7. kyuhyoung

    kyuhyoung

    Joined:
    Jul 16, 2012
    Posts:
    3
    Excuse me. Can you tell me what 'atm' stands for?
     
  8. chris0956231181

    chris0956231181

    Joined:
    Aug 10, 2019
    Posts:
    4
    At The Moment

    Months have passed and I am having the same type of error when importing YOLOv3. Actually, that error shows up even with the model provided in the Barracuda Starter Kit repo. Any thoughts?
     
  9. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    Yes! I'm getting this problem with importing a certain ONNX too!!

    And another one fails to import with error "ArgumentException: Cannot reshape array of size 4 into shape (n:1, h:1, w:1, c:1)"

    A further ONNX file failed to import with the following errors:

    "OnnxImportException: Unexpected error while parsing layer onnx::Add_212 of type Gather.
    Assertion failure. Value was False
    Expected: True"

    Are these all to do with dynamic inputs? Or just unimplemented features?
    If I have downloaded an ONNX file, is there a tool that lets me "fix the input shapes", as you say, or some other kind of quick fix I can do?

    (Basically I would like to keep all the weights, but maybe fix some things so it more-or-less works?)
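    One candidate tool, for what it's worth: onnxruntime ships a small utility for pinning dynamic dims without touching the weights. This is a usage sketch, assuming the symbolic dim is named `batch` and with placeholder file names; check your model's actual dim names (e.g. with Netron) and the onnxruntime docs first.

```shell
# Pin the symbolic dim "batch" of model.onnx to 1, writing model.fixed.onnx.
# Dim name and file names here are placeholders.
python -m onnxruntime.tools.make_dynamic_shape_fixed \
    --dim_param batch --dim_value 1 \
    model.onnx model.fixed.onnx
```

    Repeat with each symbolic dim (height, width, etc.) until no dynamic inputs remain, then try the Barracuda import again.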
     
    Last edited: Jan 4, 2023
  10. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    176
    For anybody interested in using Robust Video Matting specifically optimized for Unity and ready for production, I'm leaving this link: https://u3d.as/32kG
     
  11. LostPanda

    LostPanda

    Joined:
    Apr 5, 2013
    Posts:
    173
    @leavittx Hello, I have already sent you multiple emails. Please let me know if there is any news; otherwise, I cannot proceed with my test. I have no choice but to contact you here, please understand. Thank you again.