From what I understand, this option is still in beta.
I read here that the functionality is disabled by default and enabled if the tf2onnx package is installed.
I installed tf2onnx and downgraded TensorFlow to 1.15.
However, my models are still automatically exported as .nn.
Did I miss something, or should I change other settings as well?
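For reference, here's the quick sanity check I ran to confirm both packages are importable in the environment I train from (the version expectations in the comments are my assumptions, not official requirements):

```python
# Quick sanity check in the same environment that runs training.
# The version expectations below are my assumptions, not official
# requirements.
import tensorflow as tf
import tf2onnx

print(tf.__version__)       # expecting 1.15.x after the downgrade
print(tf2onnx.__version__)  # any recent release should do, I assume
```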
Hi!
So far so good for normal models.
However, I’m especially interested in using ONNX to include operations that aren’t supported by Barracuda (a Conv3D layer, for example).
The current flow of the script converts the model to .nn first, and only after that does it convert the model to .onnx.
In my case, this makes the script crash at the .nn conversion (due to the unsupported operation) before the .onnx conversion can take place. I guess that’s a bug, if you had the same functionality in mind.
Could there be a way to omit the Barracuda (.nn) export entirely and save to .onnx right away?
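To make the suggestion concrete, here's a rough sketch of the control flow I mean. The function names are made-up stand-ins, not the actual ml-agents API:

```python
# Hypothetical sketch -- export_barracuda/export_onnx are made-up
# stand-ins, not the real ml-agents export functions. The point is
# the control flow: a failing .nn export shouldn't block the .onnx
# export.

def export_barracuda(model_path: str) -> None:
    # Stand-in for the .nn export; pretend it hits an unsupported op.
    raise NotImplementedError("Conv3D not supported by Barracuda export")

def export_onnx(model_path: str) -> None:
    # Stand-in for the .onnx export.
    print(f"exported {model_path}.onnx")

def export_models(model_path: str) -> None:
    try:
        export_barracuda(model_path)
    except Exception as err:
        print(f".nn export failed ({err}); continuing with ONNX export")
    export_onnx(model_path)  # still runs even when the .nn export fails

export_models("my_model")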
Great! I’m not sure what exactly the Barracuda runtime is responsible for in the code, but I’m able to train a model with Conv3D, so I guess inference should also work.
By Barracuda runtime, I meant the code in Unity that actually performs inference. I checked with the Barracuda team, and they confirmed that Conv3D is not currently supported. So even if you’re able to export to ONNX, the import into the engine will likely fail.
Ah, I understand. My use case is a research project on 3D sculpting.
An environment starts with a 3D cube in the middle. The agent moves around and deletes parts of it to shape it into a specific target.
Currently I’m using 2D convolutions on slices of the environment (the x-plane, y-plane, and z-plane around the agent). RaySensors don’t provide enough information for the agent to make good choices.
I also tried stacking ~10 different x-planes, and the results are kind of okay, but for proper 3D understanding in such an environment I think an agent needs 3D convolutions.
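To give an idea of what I’m after, here’s roughly the kind of encoder I mean (plain tf.keras for illustration, not something ML-Agents consumes as-is; the grid size and layer widths are arbitrary choices):

```python
# Illustrative only: a small 3D-conv encoder over a voxel occupancy
# grid around the agent. Plain tf.keras, not ML-Agents code.
import tensorflow as tf

def build_voxel_encoder(grid_size=16):
    # Input: a grid_size^3 occupancy volume with one channel.
    inputs = tf.keras.Input(shape=(grid_size, grid_size, grid_size, 1))
    x = tf.keras.layers.Conv3D(16, 3, strides=2, activation="relu")(inputs)
    x = tf.keras.layers.Conv3D(32, 3, strides=2, activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    return tf.keras.Model(inputs, x)

encoder = build_voxel_encoder()
encoder.summary()
```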
I could see this feature being useful for many other applications as well, such as 3D object recognition or lidar interpretation in simulated autonomous driving.
Sorry to resurrect an old thread, but I found it while also looking for Conv3D support. My use case is essentially detecting specific 3D gestures in VR. Think literal hand-waving for casting spells, wizard style.
I’ve got a pretty good model developed in TensorFlow 2 that I currently can’t find a way to import. I’m pretty excited about the potential of it all but feel very dead in the water right now.
Do I have any options outside of Barracuda? I found a reference to an issue named MLA-1220 but can’t find it anywhere on the issue tracker.
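For what it’s worth, the TensorFlow-side export seems fine; it’s only the Unity import that’s blocked. My conversion looks roughly like this (paths are placeholders, and the from_keras API assumes a reasonably recent tf2onnx release):

```python
# Exporting my TF2 model to ONNX. Paths are placeholders.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("gesture_model")  # placeholder path
tf2onnx.convert.from_keras(model, opset=11, output_path="gesture_model.onnx")
```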
MLA-1220 refers to our internal Jira; it’s not visible on the public tracker.
Since the original discussion, there’s been some work on the prerequisites for doing this in Barracuda (1.1.0 and later can handle up to 8D tensors), but I think support for the Conv3D operator hasn’t been implemented yet. I don’t have any information on a timeline, but I believe it’s a high priority for the Barracuda team.
Hey, thanks for the quick reply! That’s some great news. In the meantime I’ve been fighting with frameworks like TensorFlow.NET, ML.NET, etc., trying to get my model to run in Unity, with no luck. I think you guys might be breaking new ground with this one.
I might end up shelving my project until there are new developments. Is there a way I can easily stay up to date with new features in Barracuda?