So I’m currently building an AR app for a client, and he wants to be able to take a picture with his phone and generate a static AR snapshot (i.e. generate colliders, meshes, etc.) that he can bounce objects off of.
Most of the built-in AR stuff assumes continuous tracking. How would I go about grabbing the depth/geometry data from a single still image, generating the meshes and colliders once, and then being done with it?
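To make it concrete, here’s the kind of one-shot pipeline I’m imagining, assuming I can get a single depth image out of the device: back-project each pixel through the camera intrinsics into a point, then triangulate the pixel grid into a mesh I can feed to a mesh collider. This is just a pure-Python sketch; the intrinsics `fx`, `fy`, `cx`, `cy` are placeholders I’d pull from the real camera:

```python
def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth image (row-major list of lists, in metres)
    into vertices and triangle indices for a static mesh.

    fx, fy, cx, cy are pinhole-camera intrinsics (focal lengths and
    principal point, in pixels) -- placeholders for the real camera's.
    """
    h, w = len(depth), len(depth[0])
    vertices = []
    for v in range(h):
        for u in range(w):
            z = depth[v][u]
            # pinhole model: pixel (u, v) at depth z -> camera-space point
            vertices.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    triangles = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u  # index of the top-left vertex of this grid cell
            # two triangles per 2x2 pixel quad
            triangles.append((i, i + 1, i + w))
            triangles.append((i + 1, i + w + 1, i + w))
    return vertices, triangles
```

In engine terms I’d then hand `vertices`/`triangles` to whatever static mesh + mesh-collider setup the framework provides, and never update it again.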
Also, ultimately we’re going to target the iPhone 12 Pro and its built-in LiDAR to facilitate this, but right now I’m prototyping on a cheap Android phone, so I’m aware there will be some limitations with the depth API. Is there any way I can get some kind of stock data (like an existing photo with LiDAR depth information) to pass in and mock this up?
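For the prototype, my fallback plan (in case no stock LiDAR captures turn up) is to just synthesize a plausible depth image — e.g. a flat wall with a round bump sticking out toward the camera — and run it through the same mesh-generation path. A sketch of that mock, with made-up scene parameters:

```python
import math

def mock_depth(width, height, wall_z=2.0, bump_r=0.25):
    """Synthesize a fake depth image as a stand-in for LiDAR data:
    a flat wall at wall_z metres with a spherical bump protruding
    toward the camera at the image centre."""
    depth = []
    for v in range(height):
        row = []
        for u in range(width):
            # normalised image coords in [-1, 1], (0, 0) at the centre
            x = 2.0 * u / (width - 1) - 1.0
            y = 2.0 * v / (height - 1) - 1.0
            d = math.hypot(x, y)
            if d < bump_r:
                # sphere cap: depth decreases (gets closer) near the centre
                row.append(wall_z - math.sqrt(bump_r ** 2 - d ** 2))
            else:
                row.append(wall_z)
        depth.append(row)
    return depth
```

That at least lets me exercise the snapshot/collider code end-to-end on the Android device, even if the depth values aren’t real.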