Evaluating Unity for Vision Pro development

We already have an AR app built with Unity and are exploring whether it’s worth subscribing to Unity Pro just to get access to the Vision Pro tools, or whether we should go native instead. So I wanted to check whether the following features are supported (or at least planned for the near future):

  • Access to geolocation information via GPS. I think the device doesn’t have a GPS chip itself, but it supports linking to a paired iPhone. If this is possible, can it also be used in the simulator? (See the location sketch after this list for how we read GPS data today.)
  • Access to the camera and microphone to take photos and record videos with the glasses. If supported, can this be tested in the simulator? (See the capture sketch below.)
  • Access to an embedded web view, e.g. opened in a separate window on top of the immersive view, with two-way communication between that web view and the app.
  • Full access to ARKit’s world data, i.e. without the limitations of visionOS’s “shared space”.
  • Are all features of AR Foundation on iOS supported, or are there significant differences? Is there any documentation that covers AR Foundation / ARKit on visionOS? Can I expect an AR app built with AR Foundation for iOS to “just work”, or does it have to be adapted significantly? (See the plane-tracking sketch below for the kind of code we rely on.)
  • Support for Unity’s own UI toolkits, to layer UI on top of the immersive view.
  • Support for custom (native) plugins. We have already developed a few for iOS and want to port them to visionOS. Two of these plugins use background services, e.g. uploading files to a server after the app has been put in the background. (See the bridge sketch below for how they are wired up today.)
  • Support for (server-side) push notifications, at least via a native plugin.
  • Support for Google’s External Dependency Manager for Unity (EDM4U), to pull in CocoaPods dependencies at build time.
  • Support for Unity packages from the Asset Store, even if they include native code developed for iOS (assuming the same APIs still exist on visionOS).
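
To make some of these questions more concrete, here are a few sketches of the kind of code we use on iOS today. All names are illustrative, and nothing here is confirmed to work on visionOS.

For geolocation, we read GPS data through Unity’s LocationService, roughly like this:

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch of how we read GPS data on iOS today via Unity's LocationService.
// Whether this works on visionOS (via a paired iPhone) is exactly what we're asking.
public class LocationReader : MonoBehaviour
{
    IEnumerator Start()
    {
        // The user may have disabled location services entirely.
        if (!Input.location.isEnabledByUser)
            yield break;

        Input.location.Start(desiredAccuracyInMeters: 10f, updateDistanceInMeters: 10f);

        // Wait (up to ~20 s) for the service to initialize.
        int maxWait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && maxWait-- > 0)
            yield return new WaitForSeconds(1);

        if (Input.location.status == LocationServiceStatus.Running)
        {
            var data = Input.location.lastData;
            Debug.Log($"Lat {data.latitude}, Lon {data.longitude}, Acc {data.horizontalAccuracy} m");
        }

        Input.location.Stop();
    }
}
```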
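
For camera and microphone capture, we currently go through WebCamTexture and Microphone; whether these are backed by real hardware on the device (or in the simulator) is part of the question:

```csharp
using UnityEngine;

// Sketch of our current iOS capture path (WebCamTexture + Microphone).
// Whether these APIs return real camera/mic data on visionOS is unclear to us.
public class CaptureProbe : MonoBehaviour
{
    WebCamTexture camTexture;
    AudioClip micClip;

    void Start()
    {
        if (WebCamTexture.devices.Length > 0)
        {
            camTexture = new WebCamTexture(WebCamTexture.devices[0].name);
            camTexture.Play();   // requires camera usage permission on device
        }

        if (Microphone.devices.Length > 0)
        {
            // Record 10 seconds of audio at 44.1 kHz from the default microphone.
            micClip = Microphone.Start(Microphone.devices[0], loop: false, lengthSec: 10, frequency: 44100);
        }
    }
}
```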
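
For the AR Foundation question, this is the kind of plane-tracking code our app depends on; we’d like to know how much of it carries over unchanged:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Typical AR Foundation usage from our iOS app (plane detection feeding our content).
// The question is whether this code path behaves the same under visionOS.
public class PlaneLogger : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;   // assigned in the scene

    void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (var plane in args.added)
            Debug.Log($"New plane {plane.trackableId}, alignment {plane.alignment}");
    }
}
```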
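
And for the native-plugin question, our iOS plugins are bridged with the usual extern "C" / DllImport("__Internal") pattern (the names below are illustrative); we’d like to know whether the same mechanism, plus the background upload behaviour, works when Unity targets visionOS:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// C# side of one of our iOS native plugins (illustrative names).
// The Objective-C/Swift side exposes these as extern "C" symbols; on device builds
// Unity links them statically, hence "__Internal".
public static class BackgroundUploaderBridge
{
#if UNITY_IOS && !UNITY_EDITOR
    [DllImport("__Internal")]
    private static extern void _StartBackgroundUpload(string filePath, string uploadUrl);
#else
    // Editor / unsupported-platform stub so the rest of the code still compiles.
    private static void _StartBackgroundUpload(string filePath, string uploadUrl) =>
        Debug.Log($"[stub] would upload {filePath} to {uploadUrl}");
#endif

    public static void StartUpload(string filePath, string uploadUrl) =>
        _StartBackgroundUpload(filePath, uploadUrl);
}
```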

Any insights into these features (or some of them) are appreciated. Thanks.