
Feature Request: full ONNX support

Discussion in 'Barracuda' started by olmewe, Nov 14, 2021.

  1. olmewe

    olmewe

    Joined:
    Mar 6, 2015
    Posts:
    4
    hi! this is not only a request but also a question, in a way.

    i'm building an app that requires body tracking, and i've been experimenting with barracuda for almost a year now to make that work. i'm not experienced with the neural network toolsets available nowadays, so i often find myself wandering around pinto model zoo, which is full of well-known NN models converted to multiple file formats, including onnx.

    the thing is that only a handful of the models in there actually work with barracuda; i often resort to models that other people have manually converted to work with barracuda and published online, which leaves me with a more limited set of resources to work with, since not everyone is doing the exact same thing as me.

    so yeah! as someone who has no idea how to work with the internals of an onnx file, it'd be nice if i could just plug any onnx in unity and have it working.

    at the same time, knowing that you guys took the approach of slowly implementing node support as people request it, i can imagine there's something going on that makes node implementation non-trivial, and i feel bad for just straight up asking for full onnx support like that hahah. which also makes me curious! what are the challenges here?

    dare i also ask, who exactly is barracuda made for? sometimes i feel like i'm in the wrong for expecting all this from barracuda, and that i'm actually the one responsible for converting these models. but as someone who doesn't have the time and energy to learn how every node type works or how to work with python notebooks, i genuinely wish for an inference engine that just lets me treat models as black boxes, and i think i can expect that from a package that's on version 2.3.1 already. i might be wrong though! am i missing a roadmap or anything?

    (also yes i'm aware of the (very few) alternatives, but i like barracuda's API best, i think!)

    but yeah i think that's all. sorry if that's all too much, and thank you in advance :]
     
    yoonitee likes this.
  2. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    Hi Olmewe,

    First! Thanks for your feedback and for being a supporter of our product! I will try to answer all the points/questions in order :)
    > it'd be nice if i could just plug any onnx in unity and have it working.
    Indeed, that would be nice for experimenting quickly with any model! Something to keep in mind, however, is that NNs can be vastly different in terms of runtime requirements; one still needs to carefully select and/or design a given NN according to the project's runtime and hardware requirements. This is actually one of the reasons we don't support all ONNX operators: some operators (conditional or looping control flow, for example) can be problematic in terms of performance, especially on constrained hardware. So yes, we would like to fully support any given NN built from the operators we do support, and we expand that list of operators over time; however, supporting the full list of ONNX operators (ONNX being a moving standard) is not a goal in itself. ONNX is rather the medium/bridge we use to make deployment of NNs easy in the context of a realtime 3D application (i.e. Unity).
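    As an aside, you can check up front which operator types a model actually uses and compare them against the supported list in the docs. A minimal sketch, assuming the python `onnx` package is installed; the tiny Relu graph below is purely illustrative (for a real file you would use `onnx.load("model.onnx")` instead):

    ```python
    # Sketch: list the distinct ONNX operator types a model uses.
    # Assumes the `onnx` pip package; the graph here is a stand-in for a real model.
    import onnx
    from onnx import helper, TensorProto

    # Build a tiny illustrative graph: x -> Relu -> y
    x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])
    y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])
    graph = helper.make_graph([helper.make_node("Relu", ["x"], ["y"])], "demo", [x], [y])
    model = helper.make_model(graph)

    # Collect the operator types; compare these against the supported-operator list
    ops = sorted({node.op_type for node in model.graph.node})
    print(ops)  # -> ['Relu']
    ```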

    > slowly implementing node support as people request them
    Indeed, we put a lot of care into implementing operators. Some operators are quite easy and fast to do; some we are continuously improving upon! Performance-critical operators actually all have many implementations (optimized based on node inputs, parameters and platform hardware). As stated above, our mission is to enable easy deployment of NNs in the context of Unity/realtime 3D, and we need to balance our effort between performance and memory (on all supported platforms) and the range of NNs supported. The best way we have found so far is to set our priorities based on customer feedback, both internally and externally. I hope that makes sense?

    > what are the challenges for this?
    To be sure I answer the question: some operators can be very complex to implement in the most performant way, especially once you go into the specifics of hardware (the various CPUs, GPUs, APIs and memory characteristics of target platforms).

    > dare i also ask, whom exactly is barracuda made for?
    Thanks for asking :) We have many different clients and thus use cases. There is probably no simple answer to this, but let's try:
    * ML Researcher: Expert in ML/DL and python, probably not fluent in C# or Unity.
    --> This profile is probably looking for an easy C# API that just works on a few platforms. They might need to extend the framework to be able to try new ideas (those ideas might not be expressible via ONNX). Performance and memory are probably important but not crucial; platform reach is also not crucial as long as key platforms are there.
    * Unity Dev: Expert in C# and Unity, might not be fluent in ML/DL/python.
    --> We still want an easy C# API, but it must offer fine-grained enough control for the product to reach production quality. Platform coverage and stability are key for the project to ship. Performance and memory are often enablers and are very important too. The NNs themselves usually come from the internet, from well-known and proven papers and their associated repositories. Simple import of common models, along with performance and quality/stability, is key here to enable the final product.
    * In between those two profiles, you get applied researchers, or Unity devs and hobbyists with ML knowledge.
    --> Depending on the project, they might need flexibility, stability, performance or platform reach with varying importance. Something in between the two above, thus :)
    Does it make sense? I would love your feedback!

    > the one responsible for converting these models
    I would say it depends: famous models which make sense in the context of realtime 3D should indeed work out of the box. However, exporting the model yourself is definitely something worth looking at if your project/team has the time to do so. It can sometimes improve runtime performance by a huge margin, and also lets you fine-tune the model to your specific case for extra quality.
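    If you do go down that route, the usual flow is to export from the source framework yourself, with a fixed input shape and a conservative opset, both of which tend to import more reliably. A minimal sketch, assuming pytorch is available; the tiny model and the file name are just illustrative placeholders:

    ```python
    # Sketch: export a stand-in pytorch model to ONNX with a fixed input shape.
    import torch
    import torch.nn as nn

    # Tiny placeholder network, NOT a real body-tracking model
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    dummy = torch.randn(1, 4)  # fixed shape: avoids dynamic dims an importer may reject
    torch.onnx.export(
        model, dummy, "tiny.onnx",
        opset_version=9,               # older opsets tend to be more widely supported
        input_names=["x"], output_names=["y"],
    )
    ```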

    Hope this all makes sense!
    Let us know :)

    Florent
     
    olmewe likes this.
  3. olmewe

    olmewe

    Joined:
    Mar 6, 2015
    Posts:
    4
    makes sense, yeah :] thanks so much for the reply!

    i've tried to get into the internals of ML multiple times this year, but i'll admit that trying to learn how to use these tools has been pretty frustrating -- the tools themselves are pretty inaccessible, configuring environments is far from trivial, and they seem to assume i know exactly what every available operator does, while all the documentation i can find is scarce and definitely not beginner-friendly.

    since i'm the only one working on this project and there are lots of other things to take care of, unless someone can point me in the right direction, i really can't think of any feasible way to do all this hahah

    do i have any other ways out, as a "unity dev who isn't fluent in ML"? :] like, are there any plans to have things such as in-editor tools for converting & optimising models or anything?

    otherwise i'll have to stick to my original plan -- but since you said it's expected that barracuda works with models that make sense in the context of realtime 3d applications, i assume i can create an issue on github whenever one of those doesn't work for me, right?
     
  4. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    > Are there any plans to have things such as in-editor tools for converting & optimising models or anything?
    We have done internal research into offering training in the editor. However, it seems to me that this wouldn't solve the problem, at least not completely: training your own model will likely require ML know-how and time. Thus importing pretrained models from the internet will still be very valuable, and we are back to square one :)

    > i assume i can create an issue on github whenever one of those doesn't work for me, right?
    Yes, exactly :) Please do!
     
    olmewe likes this.
  5. amirebrahimi_unity

    amirebrahimi_unity

    Joined:
    Aug 12, 2015
    Posts:
    400
  6. olmewe

    olmewe

    Joined:
    Mar 6, 2015
    Posts:
    4
    i'm looking into it right now, but it looks like that repo has more to do with the barracuda API than the internals of onnx files, right? barracuda is pretty well documented, and i already know how to work with workers and tensors and stuff; what i feel lacks documentation is how to handle onnx files and how to convert them to be compatible with barracuda, which isn't covered by that starter kit, since it skips that step and already includes compatible models.
     
  7. fguinier

    fguinier

    Unity Technologies

    Joined:
    Sep 14, 2015
    Posts:
    146
    @olmewe. Indeed. That repo's goal is to showcase the API in practical use cases. It uses popular models; for example, yolov3 was added just a few days ago!

    To go back to your point: converting from the various libraries to ONNX is documented for the simple cases. However, when it fails we indeed don't have much at the moment; the problem is that the causes can be multiple and complex. Some examples:
    - File is using an operator we do not support
    - File is using a parameter on an operator that we do not support
    - File is using an input on an operator that we require to be constant, but it is not
    - We simply have a bug
    - A combination of multiple items from above (sometimes cascading)
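    For the first two cases, a rough pre-screen is possible even today: diff the op types a model uses against a supported list. A minimal sketch in plain python; the supported set below is a made-up placeholder, not Barracuda's real list:

    ```python
    # Sketch: flag operator types a model uses that an engine does not support.
    # SUPPORTED is a hypothetical placeholder set, NOT Barracuda's actual list.
    SUPPORTED = {"Conv", "Relu", "MaxPool", "Gemm", "Reshape", "Softmax"}

    def unsupported_ops(model_ops):
        """Return the op types in model_ops that are missing from SUPPORTED."""
        return sorted(set(model_ops) - SUPPORTED)

    # e.g. op types gathered from model.graph.node with the `onnx` package
    print(unsupported_ops(["Conv", "Relu", "Loop", "NonMaxSuppression"]))
    # -> ['Loop', 'NonMaxSuppression']
    ```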

    For the first three cases, I think we should have better error reporting from the importer; error logging at the moment can be cryptic (we have a task in the backlog regarding this). Hopefully, with easier-to-understand errors it will be easier to diagnose the problem and re-export the model while avoiding the issue(s). This will definitely require some ML know-how and, in many cases, tampering with the model itself in the source ML library (pytorch etc). We could add a FAQ of the most common problems; however, I don't think we should cover the full spectrum of those NN changes in the Barracuda doc (that is the role of those libraries' docs imho).
    For the last case (a bug), better error reporting would help too; however, it is likely that the solution would be to open a bug on github for us to fix anyway. The most important thing, I guess, is to make it easy to identify that the problem is unexpected and direct people to github.
    Finally, we should also have broader support of models (we are working on this too) :)

    Opinion? :)
     
    Last edited: Dec 10, 2021
  8. olmewe

    olmewe

    Joined:
    Mar 6, 2015
    Posts:
    4
    i forgot about this thread oops

    no, yeah, it makes sense. i think having a FAQ and better error reporting, so i can figure out what exactly the issue is, would already help a lot -- at least something that can point people in the right direction, given how badly most ML documentation fails in that aspect
     
  9. yoonitee

    yoonitee

    Joined:
    Jun 27, 2013
    Posts:
    2,363
    I've been playing around with Barracuda and things from the onnx model zoo. It only seems to work with about 10%-20% of the onnx files on there. At the moment it is interesting, but it is a shame the newer, exciting AI models just don't work with it yet. (Not that my laptop would be able to run them anyway. But if they did work I'd run them on a cloud gaming PC.)
     
    Last edited: Jan 4, 2023