Model inference much slower on Barracuda compared to onnxruntime

Hi,
I have an ONNX model that was converted from a PyTorch model.
I want to run it with Barracuda in Unity. The problem is that when I run it with onnxruntime, I get almost 135 fps.
However, when I try it with Barracuda, the performance is only around 10 fps.
Is it because my model is too complicated? Sorry, I am very new to Barracuda.
[Attachment: upload_2022-10-17_12-30-0.png — model specifications]

These pictures show my model's specifications. I just use a simple script to test my model on Barracuda.
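For reference, a minimal Barracuda test script looks roughly like the sketch below (the input shape and the `modelAsset` field are assumptions, since the original script isn't shown). One common cause of a large speed gap versus onnxruntime is the choice of worker backend: the C#/CPU reference workers are far slower than the GPU compute workers.

```csharp
using Unity.Barracuda;
using UnityEngine;

// Hypothetical benchmark component; attach to a GameObject and assign
// the imported .onnx asset to modelAsset in the Inspector.
public class BarracudaBench : MonoBehaviour
{
    public NNModel modelAsset;   // the imported ONNX model
    private IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // Prefer a GPU compute worker; WorkerFactory.Type.CSharpBurst or
        // CSharpRef fall back to the CPU and can be an order of magnitude slower.
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    void Update()
    {
        // 1x224x224x3 input is an assumed shape; match your model's input.
        using (var input = new Tensor(1, 224, 224, 3))
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput();
            output.Dispose();
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
```

If swapping `WorkerFactory.Type` values changes the fps dramatically, the bottleneck is the backend rather than the model itself.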

Is there any reason my model performs so badly in Barracuda?
Thank you very much.
This is the model that I am using: https://drive.google.com/file/d/1LqHnqQ0c14lAfga3Jf81te5buTYm4A-Q/view?usp=sharing

Hello, did you manage to find a solution?

Hi,
Thank you for reporting this issue!
Can you check whether inference is still slow with the Sentis 1.3.0-pre.2 package?
The Sentis package is the successor to Barracuda; you can learn more about it here.
If the problem still exists after upgrading, please report your issue in the Sentis Discussions forum.
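For anyone trying the suggested upgrade, the Sentis equivalent of a simple inference loop is sketched below (a rough sketch against the Sentis 1.x API; the input shape is an assumption, and exact class names may differ between preview releases):

```csharp
using Unity.Sentis;
using UnityEngine;

// Hypothetical Sentis version of the same benchmark component.
public class SentisBench : MonoBehaviour
{
    public ModelAsset modelAsset;  // the imported ONNX model
    private IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // BackendType.GPUCompute is the GPU path; GPUPixel and CPU also exist.
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    void Update()
    {
        // Assumed 1x3x224x224 input; match your model's actual input shape.
        using (var input = TensorFloat.Zeros(new TensorShape(1, 3, 224, 224)))
        {
            worker.Execute(input);
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
```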
Thanks!