After watching many online videos of the Apple Park Visitor Center AR app, I cannot figure out how they achieve such position-precise 3D animation mapping onto a real-life 3D model. As you can see, they are using Unreal Engine; I assume we could achieve the same effect using Unity 3D?
How do they detect the plane? Although there is a UI prompt ("Slowly pan across the landscape to begin initialization") guiding the user to pan the iPad across a surface to initialize a plane, it looks like the user does not actually need to pan the iPad and the anchors are already pinned. The real-life 3D model is not even a flat surface! From my ARKit development experience, I cannot achieve the same result.
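For reference, this is the standard plane-detection setup I am comparing against. It is only a minimal sketch; the view-controller and outlet names are placeholders:

```swift
import ARKit
import SceneKit
import UIKit

class PlaneDetectionViewController: UIViewController, ARSCNViewDelegate {
    // Assumed to be connected in a storyboard; the name is a placeholder.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        // Horizontal plane detection only: ARKit needs the user to pan so it
        // can gather enough feature points before an ARPlaneAnchor appears.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called once ARKit has detected a plane and added an anchor for it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Plane detected, extent: \(planeAnchor.extent)")
    }
}
```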
How do they map the 3D animation to exactly the same size as the real-life 3D model? I can understand that the scale can be predicted, but what about the orientation? Core Location (iBeacon/Wi-Fi/GPS/compass heading)? Core ML + Vision (trained object detection)?
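As a thought experiment on the orientation and scale part, here is a minimal sketch (not their implementation) of what I would try with plain ARKit: align the world axes to the compass heading, and scale the virtual model to a measured real-world width. The function names and the usage values are my own assumptions:

```swift
import ARKit
import SceneKit

// Run the session so the world axes follow gravity and compass north,
// which gives a predictable orientation to lay the content out against.
func runHeadingAlignedSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    sceneView.session.run(configuration)
}

// Scale a virtual node so its width matches a measured physical width in metres.
func scale(_ node: SCNNode, toRealWorldWidth realWidth: Float) {
    let (minBound, maxBound) = node.boundingBox
    let modelWidth = maxBound.x - minBound.x
    guard modelWidth > 0 else { return }
    let factor = realWidth / modelWidth
    node.scale = SCNVector3(factor, factor, factor)
}

// Usage (the node and the measurement are hypothetical):
// scale(parkModelNode, toRealWorldWidth: 9.0)
```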
After the iPhone X was announced, a few new ARKit APIs appeared. Are they using the new ARTrackable? Or simply an ARSCNView hitTest?
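To be clear, this is the kind of ARSCNView hit test I mean; just a sketch using the existing plane hit-test types, with the view assumed to be an already-running ARSCNView:

```swift
import ARKit
import SceneKit
import UIKit

// Place an ARAnchor where a screen tap intersects an already-detected plane.
// This is the plain ARSCNView hit test, not any of the newer iPhone X-era APIs.
func placeAnchor(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    let results = sceneView.hitTest(screenPoint, types: .existingPlaneUsingExtent)
    guard let nearest = results.first else { return }
    let anchor = ARAnchor(transform: nearest.worldTransform)
    sceneView.session.add(anchor: anchor)
}
```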
I am so interested in achieving the same result as the Apple Park Visitor Center AR app. Please share your thoughts.
The above example shows how to use Vision in ARKit to observe and replace rectangles, but only rectangles. Can we use Vision to detect real-life non-rectangular objects?
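For context, this is roughly the rectangle detection that kind of example runs on an ARFrame's camera image; I assume a non-rectangular object would need a VNCoreMLRequest backed by a trained detection model instead, which is not shown here:

```swift
import ARKit
import Vision

// Rectangle detection on the current camera frame, i.e. the technique the
// rectangle example is built on. For a non-rectangular object I assume you
// would replace VNDetectRectanglesRequest with a VNCoreMLRequest plus a
// trained object-detection model.
func detectRectangles(in frame: ARFrame) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNRectangleObservation] else { return }
        for rectangle in observations {
            // boundingBox is in normalized image coordinates (0...1).
            print("Rectangle found at \(rectangle.boundingBox)")
        }
    }
    request.maximumObservations = 4

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try? handler.perform([request])
}
```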
Everything I see in the two videos is achievable with a customized version of marker-based tracking (or model-based tracking), which has been around since well before ARKit.
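To give one concrete flavour of the idea, ARKit itself later gained image (marker) detection via ARReferenceImage/ARImageAnchor. A rough sketch, assuming an asset-catalog group named "AR Resources":

```swift
import ARKit
import SceneKit

// Marker-based tracking with ARKit's built-in image detection.
// "AR Resources" is just the conventional asset-catalog group name.
func runImageMarkerTracking(on sceneView: ARSCNView) {
    guard let markers = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                         bundle: nil) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = markers
    sceneView.session.run(configuration)
}

// In your ARSCNViewDelegate, an ARImageAnchor arrives once a marker is
// recognised; content attached to its node stays registered to the marker.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARImageAnchor else { return }
    // Attach your virtual content to `node` here.
}
```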
Thank you for your information.
I tried to search for more information about model-based tracking for AR, and I find the Apple Park AR demo amazingly fast and accurate. Could you suggest any framework/SDK/plugin? I really want to build something as amazing as the demo app.