Importing a 1.6 GB ONNX model takes over 8 minutes and counting

I am trying to put a 1.6 GB neural network into the Assets folder. The first time it took 15 minutes and then ran out of hard disk space. The second time it took 8 minutes and used 20 GB of disk space (page file?).

Is there a workaround? Can I load the ONNX model as an external file instead?
When I use ONNX Runtime, I just give it the path of the ONNX file and it loads in a few seconds.

We’ve noticed some inefficiencies with larger models.
I’d suggest filing a bug report to increase visibility so we can fix it.

OK will do.
Perhaps it was because it was a float16 model, and doing a billion float16-to-float32 conversions is very slow? That is just a guess, but it is a problem I have run into before. The Unity function Mathf.FloatToHalf is not very fast.
I could try it again with a float32 model.
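As a rough illustration of why per-element half-precision conversion can dominate load time, here is a minimal Python sketch (standard library only; the `'e'` struct format is IEEE 754 half precision). It is not the importer's actual code path, just a timing comparison between converting values one at a time and unpacking a whole buffer in one call:

```python
import struct
import time

def half_to_float(bits: int) -> float:
    """Convert one 16-bit IEEE 754 half-precision pattern to a Python float."""
    return struct.unpack('<e', struct.pack('<H', bits))[0]

# 0x3C00 is 1.0 and 0xC000 is -2.0 in half precision
assert half_to_float(0x3C00) == 1.0
assert half_to_float(0xC000) == -2.0

# Converting a large tensor element by element pays per-value call
# overhead; a single bulk unpack converts the whole buffer at once.
n = 100_000
raw = struct.pack('<H', 0x3C00) * n  # n copies of half-precision 1.0

t0 = time.perf_counter()
slow = [half_to_float(struct.unpack_from('<H', raw, 2 * i)[0]) for i in range(n)]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = list(struct.unpack(f'<{n}e', raw))
t_bulk = time.perf_counter() - t0

assert slow == fast
print(f"loop: {t_loop:.4f}s  bulk: {t_bulk:.4f}s")
```

Scaled up to the hundreds of millions of weights in a 1.6 GB float16 model, that per-element overhead is the kind of thing that could plausibly stretch an import into minutes.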

It’s a combination of a few things.
We’ll profile to see what the bottlenecks are and improve the API.


Hi there, is the model you are trying to import public? If so could you post a link to it for us to profile?

Thanks a lot.

Sure, I have put it in my Dropbox and I will send you the link to the onnx file in a message.
(It was this UNet model exported as a float16 ONNX file.)

I believe there may be a memory leak in general when importing ONNX files, but this one in particular seemed to take an especially long time.

(BTW, my system has 12 GB of RAM.)

Hey all, just wanted to let you know that we are working on a fix for this. Stay tuned.


Hi everyone, just a heads up: we shipped an update to the package today that should reduce RAM consumption when loading a model. Let us know if you see an improvement.