I am trying to run a simple classification model via Unity Sentis. The model runs fine, but I get different results from when I run the model natively in PyTorch (or via the ONNX Runtime in Python).
I have no idea how I can debug this problem.
I am using Sentis package 1.3.0-pre.3.
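For reference, a minimal sketch of the kind of Python/ONNX Runtime check I'm comparing against (the model path is a placeholder):

```python
import numpy as np
import onnxruntime as ort

# Load the exported model; "model.onnx" is a placeholder path.
sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]

# Build a dummy input, treating any dynamic dimensions as 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)

out = sess.run(None, {inp.name: x})[0]
print("output shape:", out.shape, "argmax:", int(out.argmax()))
```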
How do you pass the input to the model in Sentis? If we are talking about image classification: with image inputs, I find it’s often just a mismatch in the value range the model expects, e.g. [0, 255] vs. [-1.0, 1.0] vs. [0.0, 1.0].
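Here’s a quick Python sketch of those three conventions, just to illustrate (dummy image, sizes are arbitrary):

```python
import numpy as np

# A dummy 224x224 RGB image with raw byte values in [0, 255].
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Three common input conventions -- the model expects exactly one of them:
raw    = img.astype(np.float32)                # [0, 255]
unit   = img.astype(np.float32) / 255.0        # [0.0, 1.0]
signed = img.astype(np.float32) / 127.5 - 1.0  # [-1.0, 1.0]

for name, x in [("raw", raw), ("unit", unit), ("signed", signed)]:
    print(f"{name}: min={x.min():.3f} max={x.max():.3f} mean={x.mean():.3f}")
```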
Either way, when I get unexpected outputs I usually start by checking whether the inputs are actually identical, by dumping the input tensor data to a text file in both Python and Unity.
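On the Python side that can be as simple as the following (filenames are just examples; the Unity side would write a matching one-float-per-line dump):

```python
import numpy as np

# Placeholder for the exact preprocessed array fed to onnxruntime.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Dump one float per line so a matching dump from Unity can be diffed.
np.savetxt("input_python.txt", x.ravel(), fmt="%.9g")

# Once both dumps exist, reload them and compare element-wise.
a = np.loadtxt("input_python.txt", dtype=np.float32)
b = np.loadtxt("input_unity.txt", dtype=np.float32)  # written from Unity
print("max abs diff:", np.abs(a - b).max())
```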
Hi, yes, the inputs are all in the [0.0, 1.0] range and the shapes match.
I dumped the inputs to images in Unity (and imported them back into Python) just to be on the safe side. I inspected the values manually (min, max, avg) and they seemed to match.
Since I don’t have other ideas at the moment, I will try dumping the float array to a file so I can recheck it in Python with binary-identical data.
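Rough sketch of the Python side of that check (filenames are assumptions; the Unity side could write the raw floats with a BinaryWriter, which produces little-endian float32):

```python
import numpy as np

# Raw little-endian float32 dump written from Unity (assumed filename).
unity = np.fromfile("input_unity.bin", dtype="<f4")

# The reference input saved from the Python run, flattened the same way.
ref = np.load("input_python.npy").ravel().astype(np.float32)

print("element counts match:", unity.size == ref.size)
print("bitwise identical:", np.array_equal(unity, ref))
if unity.size == ref.size:
    print("max abs diff:", float(np.abs(unity - ref).max()))
```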
If you’ve validated that the inputs are the same and you’re also processing the results the same way in Unity, it might be a bug. (Something similar was reported for another model with 1.3.0-pre.3 here.)
Your best bet is probably to post the model here if you can, so people can reproduce the issue in case it’s a bug.