After watching many online videos of the Apple Park Visitor Center AR app, I cannot figure out how they achieve such positionally precise mapping of 3D animation onto a real-life 3D model. As far as I can tell they are using Unreal Engine, but I think the same effect could be achieved with Unity 3D?

How do they detect the plane? Although there is a UI prompt ("Slowly pan across the landscape to begin initialization") guiding the user to pan the iPad across a surface to initialize a plane, it looks like the user barely needs to pan at all before the anchors are pinned. The real-life 3D model is not even a flat surface! From my own ARKit development experience, I cannot achieve the same result.

How do they map the 3D animation to exactly the same size as the real-life 3D model? I can understand that the scale could be predicted, but what about the orientation? Core Location (iBeacon/Wi-Fi/GPS/compass heading)? Core ML + Vision (a trained object-detection model)?

After the iPhone X was announced, a few new ARKit APIs were introduced. Are they using the new ARTrackable? Or simply an ARSCNView hitTest? (A minimal sketch of the basic plane-detection/hit-test flow I mean is at the end of this post.)

I am very interested in achieving the same result as the Apple Park Visitor Center AR app. Please share your thoughts.
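For reference, here is a minimal sketch of the standard ARKit plane-detection plus hit-test flow I am comparing against. The class name, placeholder geometry, and sizes are my own illustration, not Apple's code; the point is that plane detection needs camera motion and the placed content's scale and orientation still have to come from somewhere else:

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: horizontal plane detection plus a tap-based hit test
// that pins a placeholder "landscape" node to the detected plane.
class ARLandscapeViewController: UIViewController, ARSCNViewDelegate {

    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()

        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // This is what the "slowly pan across the landscape" UI is driving:
        // ARKit needs camera motion to gather feature points and fit planes.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // Classic ARSCNView hitTest against detected planes.
        guard let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }

        // Place the content where the ray hit the plane. Scale and orientation
        // are hard-coded here, which is exactly my problem: aligning them with
        // the physical model has to come from some other signal.
        let landscapeNode = makeLandscapeNode()
        landscapeNode.simdTransform = result.worldTransform
        sceneView.scene.rootNode.addChildNode(landscapeNode)
    }

    // Placeholder content; the real app would load a detailed animated scene.
    func makeLandscapeNode() -> SCNNode {
        let box = SCNBox(width: 0.3, height: 0.05, length: 0.3, chamferRadius: 0)
        return SCNNode(geometry: box)
    }
}
```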