Opening a new topic here for this question from another thread:
@ocularbioengarhku Sure, I’ll try to give some insight where I can. If you have any specific questions let me know.
FWIW, the model you’ve linked in this issue does import without errors for me in Sentis 2.1.1 (although there is a warning: `Mode 'cubic' is not supported for type Resize.`, so it may or may not work).
Model Compatibility
More generally, I usually look for .onnx files that the author of the original model (or someone else) has already converted.
Not all PyTorch models can be easily converted to ONNX:
- PyTorch must be able to “trace” the code that makes up the model
- Sometimes the model uses some fancy new operator that ONNX doesn’t yet support. ONNX is an interchange format, and the standard constantly evolves, with new operators being added as people find more uses for them in their model architectures
So I’d say most of the time when a model can be easily converted, someone has already done so.
And if the export fails due to some intricacies with the model architecture I feel it usually takes someone more skilled to fix it anyway.
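To illustrate the tracing limitation: tracing records only the operations that were executed for the one example input, so data-dependent control flow silently bakes in a single branch. This also applies to `torch.onnx.export`, which traces the model by default. A hypothetical minimal sketch (`BranchyModel` is made up for illustration):

```python
import torch

class BranchyModel(torch.nn.Module):
    def forward(self, x):
        # Data-dependent branch: tracing only records the path taken
        # for the example input it was given
        if x.sum() > 0:
            return x + 1
        return x - 1

# Traced with an input that takes the first branch...
traced = torch.jit.trace(BranchyModel(), torch.ones(2))  # emits a TracerWarning
# ...so the trace always computes x + 1, even for inputs where eager
# mode would have taken the other branch:
print(traced(-torch.ones(2)))  # tensor([0., 0.]) instead of tensor([-2., -2.])
```

Models like this need to be rewritten (or scripted rather than traced) before they can be exported faithfully.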
Converting models to ONNX format in PyTorch
If you do want to convert a suitable model yourself, it’s actually pretty simple.
A simple model and the code to export it to ONNX:
```python
import torch
import torch.onnx

# Dummy model
class SimpleModel(torch.nn.Module):
    def forward(self, x):
        return x * 2

model = SimpleModel()
dummy_input = torch.randn(1, 3, 224, 224)  # Example input
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=15)
```
I often use Google Colab for converting. The free version only gives you hosted runtimes with limited RAM though, so I convert more memory-heavy models locally.
I’m not super experienced with PyTorch/ML myself, and I found ChatGPT incredibly useful when getting started with all this. Especially if you’re new to Python and are looking through someone’s code, e.g. when you’re porting some additional pre/post-processing to Sentis (more on that below). A lot of my prompts looked like this: “Explain this python code in detail: …” and then “Port this code to C#…”
Unsupported operators in Sentis
.onnx files can then be dropped into Unity, and Sentis will import them fine a lot of the time. But as you mentioned, sometimes you’ll have a model where Sentis gives an error during import because of some unsupported operator or mode. New operators are constantly being added though, so it always makes sense to try the newest version (and post the model here if it still doesn’t work).
If an operator is not supported, you could theoretically modify the original model to remove that operator, or edit the model in Sentis to work around it, but whether that’s easy depends on the case.
By the way, there’s an online viewer called Netron that you can use to visualize & inspect the neural network graph inside .onnx files.
Using Models in Sentis
Once you have the model imported, that’s often already enough to run it, but you still have to figure out what input(s) to give the model and what the output(s) look like. E.g. for models that take an image as input, you usually need to figure out whether the model expects it in the [-1, 1], [0, 1], or [0, 255] range, or something else. Same for the output.
So even if, once you’ve got to this point, running the model is as simple as sticking in a texture, you’ll still have to learn some basics of tensor processing on inputs/outputs in Sentis.
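For the image case, those range conversions are just a bit of arithmetic. A sketch in numpy to show the general idea (Sentis has its own texture-to-tensor utilities, so this is not the Sentis API):

```python
import numpy as np

# Dummy 8-bit RGB image, shape (H, W, C), values in [0, 255]
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

x01 = img.astype(np.float32) / 255.0        # scale to [0, 1]
x11 = img.astype(np.float32) / 127.5 - 1.0  # scale to [-1, 1]

# Many models also expect channels-first layout with a batch dimension (NCHW)
batch = np.transpose(x01, (2, 0, 1))[None, ...]
print(batch.shape)  # (1, 3, 224, 224)
```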
Often the most complicated part is that there’s not much documentation. So you can either inspect with Netron, look at the original Python code and interrogate an LLM about it, or go the trial-and-error route of just sticking some data in and seeing what you get.