TLDR: On a LiDAR-equipped iPhone or iPad, is the LiDAR sensor input factored in when generating feature points? And which "Trackable" in ARFoundation represents the LiDAR mesh? Is it TrackableType.Depth? Is it TrackableType.Face? (Enum TrackableType | AR Subsystems | 4.2.10)
Asked differently: ARKit and ARCore use some form of feature-tracking algorithm; for that they mostly use visual cues, but probably also weigh in other sensors such as the gyroscope. In that sense, LiDAR is just one more sensor in the array of available sensors that might be factored into the SLAM algorithm.
However, when I build a simple ARFoundation feature point cloud visualizer scene, an iPhone XS and an iPad Pro (with LiDAR) generate the exact same feature points. That suggests to me that the feature point detection algorithm does not factor in LiDAR data at all.
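For reference, my visualizer boils down to something like this (a minimal sketch; the component and log wording are mine, but ARPointCloudManager and its pointCloudsChanged event are the ARFoundation 4.x API I'm using):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Minimal feature point logger: sits next to an ARPointCloudManager
// on the AR Session Origin and logs per-frame point counts.
[RequireComponent(typeof(ARPointCloudManager))]
public class FeaturePointLogger : MonoBehaviour
{
    ARPointCloudManager m_Manager;

    void OnEnable()
    {
        m_Manager = GetComponent<ARPointCloudManager>();
        m_Manager.pointCloudsChanged += OnPointCloudsChanged;
    }

    void OnDisable() => m_Manager.pointCloudsChanged -= OnPointCloudsChanged;

    void OnPointCloudsChanged(ARPointCloudChangedEventArgs args)
    {
        foreach (var cloud in args.updated)
        {
            // positions is a NativeSlice<Vector3>? in session space.
            var count = cloud.positions.HasValue ? cloud.positions.Value.Length : 0;
            Debug.Log($"Point cloud {cloud.trackableId}: {count} feature points");
        }
    }
}
```

On both devices the logged counts and point distributions look the same to me, which is what prompted the question.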
Is my observation correct? If so, why would ARFoundation not integrate LiDAR data? Is it not yet implemented due to differences between ARCore and ARKit, or does even ARKit itself not use LiDAR data for feature tracking?
A use case for factoring in LiDAR data is, for example, when the device looks at a white wall or a similarly homogeneous area. Regular visual feature-tracking algorithms are unable to detect any features in that scenario, but LiDAR devices might generate feature points even there. I don't see why ARFoundation (or ARKit) wouldn't use the sensor if it is available.
Also, basically everything in ARFoundation that is located and tracked in 3D space is represented by a "Trackable" (e.g. feature points, planes, images, faces …). Which "Trackable" in ARFoundation represents the LiDAR mesh? Is it TrackableType.Depth? Is it TrackableType.Face? (Enum TrackableType | AR Subsystems | 4.2.10)
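To make the question concrete: when raycasting via ARRaycastManager you pass a TrackableType mask, and I don't know which value (if any) targets the LiDAR mesh. Something like this, where TrackableType.Depth is purely my guess:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class MeshRaycastExample : MonoBehaviour
{
    [SerializeField] ARRaycastManager m_RaycastManager;
    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;

        // Which TrackableType targets the LiDAR mesh here?
        // TrackableType.Depth is just my guess; it might be something else entirely.
        if (m_RaycastManager.Raycast(Input.GetTouch(0).position, s_Hits, TrackableType.Depth))
        {
            Debug.Log($"Hit at {s_Hits[0].pose.position} (type: {s_Hits[0].hitType})");
        }
    }
}
```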