ONNXModelConverter is not available at runtime with Unity Sentis 1.5.0-pre.2

ONNXModelConverter is no longer available at runtime as of Unity Sentis 1.5.0-pre.2. It looks like Unity.Sentis.ONNX has been moved into an Editor-only assembly in that release.
We still need this feature for our projects. Is there another way to load ONNX at runtime?
Thanks,

You can load models at runtime by serializing them to Sentis format. This is the recommended way to load models at runtime from a path.
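For reference, a minimal sketch of that workflow. The API names here (`ModelLoader.Load`, `ModelWriter.Save`, `WorkerFactory.CreateWorker`) are from the Sentis 1.x docs as I recall them; check the documentation for your exact version:

```csharp
using Unity.Sentis;
using UnityEngine;

// Editor-side: serialize an imported ONNX ModelAsset to the Sentis
// binary format so it can later be loaded from a file path at runtime.
public static class SerializeModelExample
{
    public static void Export(ModelAsset asset, string path)
    {
        Model model = ModelLoader.Load(asset); // load the imported ONNX model
        ModelWriter.Save(path, model);         // write a .sentis file to disk
    }
}

// Runtime-side: load the serialized .sentis model from a path.
public class RuntimeLoadExample : MonoBehaviour
{
    void Start()
    {
        string path = Application.streamingAssetsPath + "/model.sentis";
        Model model = ModelLoader.Load(path);
        var worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
        // ... run inference with the worker ...
        worker.Dispose();
    }
}
```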

Can you describe your use case for loading specifically onnx files at runtime?

We provide an application that selects and reads arbitrary ONNX files at runtime that users have added to the device storage themselves. In this use case, the ONNX files are generated by the users; they are not bundled with our application.
The end-user is not using the Unity Editor. They have no way to convert ONNX (.onnx) to Sentis (.sentis). Should I have them install the Unity Editor on their PC just for that purpose? :frowning:

Until Unity Sentis 1.4.0, the ONNXModelConverter class was available for this use case. Is there a specific reason it should not be available in Unity Sentis 1.5.0?

Please consider providing an ONNXModelConverter class again. Thanks,

3 Likes

We removed it to clean up the assemblies and to avoid shipping protobuf and ONNX dependencies at runtime.
Since it’s usually an import tool, we decided to move it into the Editor-only asmdef for a more streamlined runtime build.

1 Like

Are you keeping the architecture the same, i.e. fine-tuning only the weights?

I heard you are working on a feature to export directly from PyTorch to Sentis. That would cover my use case. I am relieved that we will be able to continue providing similar applications in the future. Thanks,
https://portal.productboard.com/prhsasojp2xzn5sxdbybdmnu/c/2638-pytorch-direct-export?&utm_medium=social&utm_source=starter_share


On the other hand, I think ONNX is a good format that has become a de facto standard, so it would seem worth continuing to support it. I think that feature is one of the good things about Sentis.
For example, how about offering the ONNX features as a separate package? Those who need them could use them as before, while those who do not would get a more streamlined runtime build.
(This is my personal opinion just for your reference.)

3 Likes

By dropping ONNXModelConverter runtime support, I believe a lot of very interesting and powerful use cases are excluded. I suppose the .sentis serialized format will be tightly coupled to the Sentis version as well?

1 Like

Layer names do not seem to be retained by the ModelAsset either?

Any news on this? I also need to load .onnx models at runtime.
Or a way to convert .onnx files to serialized ModelAssets using Python.