Robust Video Matting ONNX import error

Hello!

I want to learn Barracuda and machine learning with Unity. I am trying to import the ONNX models provided by RobustVideoMatting (GitHub - PeterL1n/RobustVideoMatting: Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!), but I get the following error:

Exception: Must have input rank for 613 in order to convert axis for NHWC op
Unity.Barracuda.Compiler.Passes.NCHWToNHWCPass.ConvertAxis (Unity.Barracuda.Layer layer, Unity.Barracuda.ModelBuilder net)
Asset import failed, "Assets/RVM/ONNX/rvm_resnet50_fp32.onnx" > Exception: Must have input rank for 613 in order to convert axis for NHWC op

And warnings:

Unsupported attribute coordinate_transformation_mode, node 399 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 399 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
Unsupported attribute ceil_mode, node 579 of type AveragePool. Value will be ignored and defaulted to 0.
Unsupported attribute ceil_mode, node 580 of type AveragePool. Value will be ignored and defaulted to 0.
Unsupported attribute ceil_mode, node 581 of type AveragePool. Value will be ignored and defaulted to 0.
Unsupported attribute coordinate_transformation_mode, node 605 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 605 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
Unsupported attribute coordinate_transformation_mode, node 641 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 641 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
Unsupported attribute coordinate_transformation_mode, node 677 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 677 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
Unsupported attribute coordinate_transformation_mode, node 713 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 713 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
Unsupported attribute coordinate_transformation_mode, node 770 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 770 of type Resize. Value will be ignored and defaulted to round_prefer_floor.
Unsupported attribute coordinate_transformation_mode, node 784 of type Resize. Value will be ignored and defaulted to half_pixel.
Unsupported attribute nearest_mode, node 784 of type Resize. Value will be ignored and defaulted to round_prefer_floor.

Is the model supported by Barracuda (it is based on MobileNet and ResNet)? Do you have any idea what the issue is?

Thanks!

Looking at the model, the reason we don’t support importing it atm is the dynamic input shapes.
If you fix the input shapes, the model should be easier to import.
Let me know

Thank you for your quick answer. I am not an expert, but I’ll try it. Thanks!

There is an easy way to use RVM in Unity:
https://hub.natml.ai/@natsuite/meet-segmentation

Have you solved the problem? Thanks!

No success so far. I’ve tried different video matting and human segmentation ONNX models in Barracuda (GitHub - PINTO0309/PINTO_model_zoo: A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.), and the problem with dynamic input shapes keeps recurring.

There is also an issue with some models:

624 Number of elements in InstanceNorm must match features from the previous layer. Was expecting 96, but got 48.

I am learning PyTorch from scratch in my spare time, so it is a slow process :')

Excuse me. Can you tell me what ‘atm’ stands for?

At The Moment

Months have passed and I am getting the same type of error when importing YOLOv3. Actually, that error shows up even with the model provided in the Barracuda Starter Kit repo. Any thoughts?

Yes! I’m getting this problem when importing a certain ONNX model too!

And another one fails to import with the error “ArgumentException: Cannot reshape array of size 4 into shape (n:1, h:1, w:1, c:1)”.

Another ONNX file failed to import with the following errors:

“OnnxImportException: Unexpected error while parsing layer onnx::Add_212 of type Gather.
Assertion failure. Value was False
Expected: True”

Are these all due to dynamic inputs, or just unimplemented features?

If I have downloaded an ONNX file, is there a tool that lets me “fix the input shapes”, as you say, or some other kind of quick fix I can apply?

(Basically, I would like to keep all the weights, but maybe fix some things so it more or less works?)

For anybody interested in using Robust Video Matting specifically optimized for Unity and ready for production, I’m leaving this link: Awesome ML Kit | AI-ML Integration | Unity Asset Store

@leavittx Hello, I have already sent you multiple emails. Please let me know if there is any news; otherwise, I cannot proceed with my test. I had no choice but to contact you here; please understand. Thank you again.