I am trying to implement object detection with Barracuda and a yolov7-tiny_256x320 model for an AR application on the Magic Leap 2. I get the raw frames from video capture as shown in the ML Dev Forum, but when I proceed to the detection part I run into problems. First, importing the .onnx produces warnings (unsupported attribute nearest_mode …), and second, I am not sure how to continue once I have the video frame in a texture. A sketch of what I have so far is below.
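Roughly, this is the pipeline I am attempting (names like modelAsset and cameraTexture are placeholders for my imported ONNX and the capture texture, and I am assuming the frame is already resized to the model's 320x256 input):

```csharp
using UnityEngine;
using Unity.Barracuda;

public class YoloRunner : MonoBehaviour
{
    public NNModel modelAsset;   // yolov7-tiny_256x320.onnx imported as an NNModel
    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, model);
    }

    // Called once per captured frame; cameraTexture holds the ML2 camera image,
    // assumed to already be 320x256 (otherwise blit it to a RenderTexture first)
    public void Detect(Texture cameraTexture)
    {
        // Texture -> tensor; Barracuda rescales pixel values to [0, 1]
        using (var input = new Tensor(cameraTexture, channels: 3))
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput(); // owned by the worker, no Dispose needed
            // ... this is where I am stuck: how do I decode 'output'?
        }
    }

    void OnDestroy() => worker?.Dispose();
}
```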
Hi salexiou,
I recommend that you upgrade to Sentis 2.1. There has been a lot of development since the package was renamed from Barracuda to Sentis.
There are some samples that may help you, for instance the BlazeDetectionSample; a minimal sketch of what inference looks like in Sentis 2.1 follows.
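This is only a sketch, assuming your model is imported as a ModelAsset and the incoming texture already matches the model's input size; see TextureTransform in the docs for resizing and layout options:

```csharp
using UnityEngine;
using Unity.Sentis;

public class SentisYolo : MonoBehaviour
{
    public ModelAsset modelAsset;
    Worker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = new Worker(model, BackendType.GPUCompute);
    }

    public void Detect(Texture frame)
    {
        // Converts the texture to a float tensor (channels-first by default);
        // frame is assumed to already be 320x256
        using Tensor<float> input = TextureConverter.ToTensor(frame);
        worker.Schedule(input);
        var output = worker.PeekOutput() as Tensor<float>;
        using var cpuOutput = output.ReadbackAndClone(); // copy results back to the CPU
        // decode cpuOutput here
    }

    void OnDestroy() => worker?.Dispose();
}
```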
Regards,
Viviane
Sure, but I think Barracuda can make things work as well. What I don't know is exactly how to handle the tensor output (i.e. how to implement the DebugYOLOOutput step). My current attempt is attached, and a sketch of the decoding I have in mind is below.
ObjectDetection.cs (6.2 KB)
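For what it's worth, this is the kind of decoding I am trying to write. It is a sketch that assumes the ONNX was exported with grid decoding baked in, so each output row is already a decoded box [cx, cy, w, h, objectness, class scores…]; the row count and class count depend on the export:

```csharp
using UnityEngine;
using Unity.Barracuda;

public static class YoloDebug
{
    // Prints detections from a (1, N, 85) YOLOv7 output tensor.
    // Assumes rows are already decoded boxes in input-image pixels (320x256 here).
    public static void DebugYOLOOutput(Tensor output, float confThreshold = 0.25f)
    {
        float[] data = output.ToReadOnlyArray();
        const int numClasses = 80;      // COCO; change to match your training
        int stride = 5 + numClasses;    // 85 values per candidate box
        int rows = data.Length / stride;

        for (int i = 0; i < rows; i++)
        {
            int off = i * stride;
            float objectness = data[off + 4];
            if (objectness < confThreshold) continue;

            // Find the best-scoring class for this candidate
            int bestClass = 0;
            float bestScore = 0f;
            for (int c = 0; c < numClasses; c++)
            {
                float s = data[off + 5 + c];
                if (s > bestScore) { bestScore = s; bestClass = c; }
            }

            float conf = objectness * bestScore;
            if (conf < confThreshold) continue;

            Debug.Log($"class {bestClass} conf {conf:F2} " +
                      $"box cx={data[off]:F1} cy={data[off + 1]:F1} " +
                      $"w={data[off + 2]:F1} h={data[off + 3]:F1}");
        }
        // Note: overlapping boxes still need non-max suppression before use.
    }
}
```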