ARFoundation & Barracuda - Async inference makes AR lag

Hello,

We’ve been experiencing issues using ARFoundation and Barracuda with some ONNX models.

The current pipeline gets an image from ARFoundation, sends it asynchronously to Barracuda’s worker, waits for the output, processes it, and then signals that we are ready for the next frame.
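A minimal sketch of that pipeline, assuming Barracuda’s `IWorker` API; the camera-texture conversion and output handling are placeholders for whatever the app actually does:

```csharp
using System.Collections;
using Unity.Barracuda;
using UnityEngine;

// Sketch of the pipeline described above: grab a camera frame, run it through
// a Barracuda worker, then flag readiness for the next frame.
public class InferencePipeline : MonoBehaviour
{
    public NNModel modelAsset;   // ONNX model imported by Barracuda

    IWorker worker;
    bool busy;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    void Update()
    {
        if (busy) return;        // only one inference in flight at a time
        StartCoroutine(RunInference());
    }

    IEnumerator RunInference()
    {
        busy = true;
        // GetCameraTexture() is a stand-in for the ARFoundation image conversion.
        using (var input = new Tensor(GetCameraTexture(), channels: 3))
        {
            worker.Execute(input);   // schedules the whole network
            yield return null;       // give it at least one frame
            var output = worker.PeekOutput();
            // ... process output ...
        }
        busy = false;
    }

    Texture GetCameraTexture() { /* app-specific conversion */ return null; }

    void OnDestroy() => worker?.Dispose();
}
```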

While on some very high-end devices this workflow works great in AR (iPhone 13 Pro Max, iPad Pro M1), on some high-end devices (iPad Pro 2nd generation, iPhone 12 Pro Max) it produces jitter in the AR camera stream, making AR difficult to use.

To demonstrate this we’ve taken the ARFoundation samples project (on the 4.2.8 release) and added Barracuda with a sample setup similar to what we’re doing. Here is the GitHub repository: GitHub - Pourfex/barracuda-arfoundation-performance-analysis: Using Barracuda in ARFoundation samples to analyse performance issues
To reproduce, check out branch “4.2” and open the Unity scene “CpuImages”.

In the profiler we see spikes, with some Blit usage, whenever the AR stream jitters.

We wonder if there is any way to eliminate this jitter. We tried changing Barracuda’s worker instantiation, and using smaller inputs/outputs for the ONNX model does help, but the problem is still here (spikes are still present in the profiler).
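One approach worth trying for this kind of per-frame spike is Barracuda’s manual scheduling, which lets you execute a few layers per frame instead of the whole network at once. A sketch, assuming an existing `IWorker` and a prepared input `Tensor` (`layersPerFrame` is just a tuning knob, not a Barracuda constant):

```csharp
using System.Collections;
using Unity.Barracuda;
using UnityEngine;

// Sketch: time-slice inference so no single frame pays for the whole network.
public class TimeSlicedInference : MonoBehaviour
{
    public int layersPerFrame = 5;

    public IEnumerator RunSliced(IWorker worker, Tensor input)
    {
        // StartManualSchedule returns an enumerator that advances one layer per MoveNext().
        IEnumerator schedule = worker.StartManualSchedule(input);
        int step = 0;
        while (schedule.MoveNext())
        {
            if (++step % layersPerFrame == 0)
                yield return null;   // resume on the next frame
        }
        Tensor output = worker.PeekOutput();
        // ... process output ...
    }
}
```

Whether this removes the Blit spikes depends on where the cost actually is (layer execution vs. texture upload/readback), so it is a sketch to profile against, not a guaranteed fix.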

Best regards

I am experiencing exactly the same thing. On an iPhone 13 Pro it is smooth as butter; on an iPhone 12 Pro there are clear slowdowns on every frame the model runs. I wouldn’t have thought that one device generation would make that much of a difference.

I am not using AR Foundation directly, but rather Niantic’s Lightship (which uses ARKit underneath). Still the same problem though.

I never found the cause, though. Hoping that Sentis will eventually solve it.

Hey guys, I would like to invite you to the new version of Barracuda, which is called Sentis and is currently in a private beta. If you DM me your email addresses, and sign up on Unity with the same emails, I am happy to invite you!

In the meantime, here is the documentation: