Scenario: we want to create a 3D visualization that also requires text and other UI screens, e.g. an info panel, a tooltip pointing to a specific location, an expandable list with metadata, and more.
Concept question: Unity offers UI components, and visionOS does as well. Which should be the leading UI system to ensure everything looks uniform and reacts identically to all system events (light mode, dark mode, font size, …)? We would assume that is visionOS. What are best practices for creating such tightly coupled views of 3D objects and UI, and what is the best way to actually realize that?
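For context, this is the kind of coupling we mean. On the pure native side, SwiftUI can pin views to 3D content via attachments in a RealityView; a minimal sketch (the sphere and tooltip contents are placeholders, not our actual app) might look like this:

```swift
import SwiftUI
import RealityKit

struct VisualizationView: View {
    var body: some View {
        RealityView { content, attachments in
            // Placeholder 3D content; in the real app this would be the visualization.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial()])
            content.add(sphere)

            // Pin the SwiftUI tooltip to a point on the 3D object.
            if let tooltip = attachments.entity(for: "tooltip") {
                tooltip.position = [0, 0.15, 0]  // just above the sphere
                sphere.addChild(tooltip)
            }
        } attachments: {
            Attachment(id: "tooltip") {
                // Standard SwiftUI, so it reacts to system appearance
                // and font-size settings automatically.
                Text("Point of interest")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```

The open question for us is how to achieve the same tight coupling when the 3D content is driven by Unity through PolySpatial.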
A downside of having visionOS as the leading UI system is that one can no longer prototype this in Unity or reuse existing UIs from other deployments, e.g. Android.
Is there something planned like a deeper UI Toolkit integration, where we would, for example, create the UI in Unity, and PolySpatial would automatically translate it into SwiftUI views when deploying to visionOS? This could be a great way to bring both worlds together and still support a create once, publish anywhere paradigm.
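Purely as an illustration of what such a pipeline could emit (nothing below is an existing PolySpatial API, and the view name and contents are invented), a UI Toolkit hierarchy with a label and an expandable list might translate into SwiftUI roughly like this:

```swift
import SwiftUI

// Hypothetical output of a UI Toolkit -> SwiftUI translation step.
// InfoPanelView and its contents are placeholders for illustration;
// no such generator exists today (hence this question).
struct InfoPanelView: View {
    @State private var isExpanded = false  // would mirror the UXML element's state

    var body: some View {
        VStack(alignment: .leading) {
            Text("Info Panel")              // from a UXML <Label>
                .font(.headline)            // inherits system font-size settings
            DisclosureGroup("Metadata", isExpanded: $isExpanded) {
                Text("Sample item")         // list items from the UXML hierarchy
            }
        }
        .padding()
        .glassBackgroundEffect()            // native visionOS look for free
    }
}
```

That way the UI would be authored once in Unity but rendered as genuinely native views, picking up system appearance and accessibility behavior without per-platform work.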
Or are there other/better approaches?