Here’s an issue I’m not sure how to approach: when a ground plane is found and we spawn an object on it, turning the camera away from the plane (or up towards the ceiling) seems to make the computer vision algorithms freak out, sending the ground plane, and any objects attached to it, way off into the horizon.
Is there a way to make this issue less apparent? Any tips / ideas for avoiding this?
Is this what ARReferencePoints (anchors) are for? =P
Yeah, that’s a limitation of the tech right now, I think. Reference Points don’t solve this problem: they keep your content attached to the plane as the system refines its estimate, but they can’t stop the estimate itself from drifting while the plane is out of the camera’s view.
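For what it’s worth, here’s a minimal ARKit/SceneKit sketch of what anchoring does buy you (class and helper names are just illustrative): content parented under an anchor’s node follows ARKit’s corrections to the plane, but nothing here prevents the plane estimate itself from wandering off while it’s out of view.

```swift
import ARKit
import SceneKit

// Illustrative sketch: place content via an ARAnchor rather than at a raw
// world position, so it follows ARKit's ongoing corrections to the plane.
class PlacementController: NSObject, ARSCNViewDelegate {
    weak var sceneView: ARSCNView?

    // Place an object by adding an anchor at a hit-test position on a plane.
    func placeObject(at result: ARHitTestResult) {
        let anchor = ARAnchor(transform: result.worldTransform)
        sceneView?.session.add(anchor: anchor)
    }

    // When ARKit creates a node for our anchor, attach the content to it.
    // The node's transform tracks the anchor across ARKit's refinements,
    // but an anchor can't stop the plane estimate itself from drifting.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard !(anchor is ARPlaneAnchor) else { return } // skip ARKit's own plane anchors
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        node.addChildNode(box)
    }
}
```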
I expect tracking stability to improve as devices become more powerful, but in the meantime both Apple and Google have published guidelines on how to direct users to interact with your app.
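One practical recommendation from those guidelines is to watch the session’s tracking quality and react to it, rather than leaving objects on screen with a corrupted pose. A rough sketch of that on the ARKit side (the callback names are mine, not part of the API):

```swift
import ARKit

// Illustrative sketch: react to tracking-quality changes, e.g. when the
// camera points at a featureless ceiling, so the app can hide placed
// content and coach the user instead of letting objects jump around.
class TrackingMonitor: NSObject, ARSessionDelegate {
    var onTrackingDegraded: ((String) -> Void)?  // hypothetical hooks for your UI
    var onTrackingNormal: (() -> Void)?

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            onTrackingNormal?()
        case .notAvailable:
            onTrackingDegraded?("Tracking unavailable")
        case .limited(.excessiveMotion):
            onTrackingDegraded?("Slow down, you're moving the device too fast")
        case .limited(.insufficientFeatures):
            onTrackingDegraded?("Point the camera at a textured surface")
        case .limited:
            onTrackingDegraded?("Tracking limited")
        }
    }
}
```

You could, for instance, fade out placed objects and show a “point the camera back at the floor” hint whenever the degraded callback fires, which at least hides the objects-flying-to-the-horizon moment from the user.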