Quick question with AR Foundation 5. We have a large model, at 1:1 real-world scale, that we would like to place exactly using a printed image and image tracking. However, the image will only be used for the initial placement.
After the initial placement, does AR Foundation continue to track the scene, even though our tracked image won't be visible in the scene anymore?
The use case: we place a printed image on the ground so we can position our AR building at 1:1 real-world scale, facing a certain direction. Then we want to be able to walk around the outside of the building, but obviously the printed image will no longer be in view. Will AR Foundation keep tracking, or how would I go about making sure the building stays grounded, so to speak, in our scene?
AR Foundation does not implement AR features. It takes advantage of platform provider implementations such as ARCore on Android and ARKit on iOS.
If your image is not visible to your device camera, both ARCore and ARKit will assume that the image is static and continue to try to track it. Your mileage may vary depending on how large an area you are trying to cover.
OK, but after it finds my static image the first time and I place my content, does the device keep tracking after that point, even if the image goes out of view? Or at that point, does it use the device's gyro and internal sensors to keep track of where the "cam" is in 3D space as well as real-world space?
So if I place my content, then turn around 180° and turn back to the original position, will my content be where I placed it, or would it drift since it's no longer tracking?
Specifically, my question is for iPads no older than 2021.
Platform implementations will attempt to keep tracking images even when they are out of frame. Whether they use other sensors to accomplish this is a question for each platform provider individually.
Yes, I actually used this sample in my test, hence my question. I was wondering if I'm doing something wrong, since if I turn around 180° and then turn back, the CG image is off target and I have to come close to the printed image again for it to snap back into position…
Mobile tracking technologies are especially prone to drift when the tracked object is not in frame. This is not your error but a limitation of platform technologies.
Could I overcome it somewhat by, in addition to image tracking, adding a plane tracking manager and an anchor manager and placing a couple of anchors? Or would that not make much difference, and I'll end up in a similar situation when turning 180° and back?
To improve accuracy, you may want to monitor the pose of the image and only detach from the image and add the anchor once the pose is stable.
I have used this technique for a number of projects and it has worked well for this kind of use case on both iOS and Android. Let us know if you have any issues though.
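The approach above could be sketched roughly like this. This is a minimal sketch assuming AR Foundation 5.x; the class name `StableImageAnchor`, the thresholds, and the frame count are illustrative values I picked, not part of any official API:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class StableImageAnchor : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager imageManager;
    [SerializeField] GameObject contentPrefab;

    const float positionThreshold = 0.005f; // max movement per frame, ~5 mm (illustrative)
    const float rotationThreshold = 0.25f;  // max rotation per frame, in degrees (illustrative)
    const int requiredStableFrames = 30;    // roughly half a second at 60 fps

    Pose lastPose;
    int stableFrames;
    bool anchored;

    void Update()
    {
        if (anchored) return;

        foreach (var image in imageManager.trackables)
        {
            if (image.trackingState != TrackingState.Tracking) continue;

            var pose = new Pose(image.transform.position, image.transform.rotation);
            bool stableThisFrame =
                Vector3.Distance(pose.position, lastPose.position) < positionThreshold &&
                Quaternion.Angle(pose.rotation, lastPose.rotation) < rotationThreshold;

            stableFrames = stableThisFrame ? stableFrames + 1 : 0;
            lastPose = pose;

            if (stableFrames >= requiredStableFrames)
            {
                // Pose is stable: detach from the image by creating a world-space
                // anchor at the image pose and parenting the content to it, so the
                // content no longer depends on the image being in view.
                var anchorObject = new GameObject("ContentAnchor");
                anchorObject.transform.SetPositionAndRotation(pose.position, pose.rotation);
                var anchor = anchorObject.AddComponent<ARAnchor>();
                Instantiate(contentPrefab, anchor.transform);
                anchored = true;
            }
        }
    }
}
```

In AR Foundation 5.x, adding an `ARAnchor` component to a GameObject at the desired pose is the supported way to create a free-floating anchor; earlier versions used `ARAnchorManager.AddAnchor` instead.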
How do you gauge when the pose becomes “stable”? Also, would it be best to place the anchor at the position of the content, the position of the camera, or the position of some feature point / tracked plane?
In general my advice is to always keep your content close to your anchor. The further away your content is from the anchor, the more you will experience floating-point rounding errors in the content’s position. Your anchor should also be placed in a well-tracked position such as on a plane or near feature points.
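As a concrete illustration of that advice, here is a sketch (assuming AR Foundation 5.x) that attaches an anchor to a tracked plane at the content's pose, then parents the content directly to the anchor so the offset stays small. The helper name and parameters are my own; the `ARPlane` would typically come from an `ARPlaneManager` or a raycast hit:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public static class AnchorPlacement
{
    public static GameObject PlaceOnPlane(
        ARAnchorManager anchorManager, ARPlane plane, Pose contentPose, GameObject prefab)
    {
        // AttachAnchor ties the anchor to the plane, so it is refined along with
        // the plane's estimate as tracking improves.
        ARAnchor anchor = anchorManager.AttachAnchor(plane, contentPose);
        if (anchor == null) return null;

        // Instantiate the content as a direct child of the anchor, keeping it
        // close to the anchor to minimize floating-point error in its position.
        return Object.Instantiate(prefab, anchor.transform);
    }
}
```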
Hi @Stents ,
I am having similar trouble. Can you guide me through how you achieved accuracy? I have tried this code and my prefab is always spawning at a weird angle. Can you help me out?
There may be a better method of adding an anchor, depending on the version you are using. Look for TryAddAnchor or AddAnchor on either the anchor manager or the anchor subsystem. Do you know which version of AR Foundation you are using?
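To illustrate how the anchor-creation API differs by version, here is a sketch of the two common variants; only the one matching your installed package will compile, so check your `Packages/manifest.json` for the `com.unity.xr.arfoundation` version:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public static class AnchorHelper
{
    // AR Foundation 5.x: create an anchor by adding an ARAnchor component
    // to a GameObject placed at the desired pose.
    public static ARAnchor CreateAnchorV5(Pose pose)
    {
        var go = new GameObject("Anchor");
        go.transform.SetPositionAndRotation(pose.position, pose.rotation);
        return go.AddComponent<ARAnchor>();
    }

    // AR Foundation 4.x: AddAnchor on the manager (deprecated and later removed).
    public static ARAnchor CreateAnchorV4(ARAnchorManager anchorManager, Pose pose)
    {
        return anchorManager.AddAnchor(pose);
    }
}
```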