UI Best Practices VisionOS - Unity

Scenario: we want to create a 3D visualization that also requires text and other UI screens, e.g. an info panel, a tooltip pointing to a specific location, an expandable list with metadata, and more.

Concept question: Unity offers UI components, and visionOS does as well. Which should be the leading UI system to ensure everything looks uniform and reacts identically to system events (light mode, dark mode, font size, …)? We would assume that is visionOS. What are the best practices for creating such tightly coupled views of 3D objects and UI, and what is the best way to actually realize that? A sketch of the kind of coupling we mean follows below.
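To make the coupling concrete, here is a minimal Unity sketch of the setup we have in mind: a world-space UGUI tooltip that follows an anchor point on the 3D object and billboards toward the user. The class and field names (`TooltipFollower`, `anchor`, `offset`) are purely illustrative.

```csharp
using UnityEngine;

// Illustrative sketch: keeps a world-space UI panel attached to an anchor point
// on a 3D object and facing the user, so the tooltip stays visually coupled
// to the thing it describes.
public class TooltipFollower : MonoBehaviour
{
    public Transform anchor;                             // point on the 3D object the tooltip refers to
    public Vector3 offset = new Vector3(0f, 0.15f, 0f);  // lift the panel slightly above the anchor
    public Camera viewCamera;                            // falls back to Camera.main if left empty

    void LateUpdate()
    {
        if (anchor == null) return;
        var cam = viewCamera != null ? viewCamera : Camera.main;

        // Position the panel relative to the anchor point.
        transform.position = anchor.position + offset;

        // Billboard: rotate the panel so it faces the camera.
        if (cam != null)
        {
            transform.rotation = Quaternion.LookRotation(
                transform.position - cam.transform.position, Vector3.up);
        }
    }
}
```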

A downside of having visionOS as the leading UI is that one can no longer prototype this in Unity or reuse existing UIs from other deployments, e.g. Android.

Is something planned along the lines of a deeper UI Toolkit integration, where we would, for example, create the UI in Unity and PolySpatial would translate it automatically into SwiftUI views when deployed to visionOS? This could be a great way to bring both worlds together and still support a create once, publish anywhere paradigm.

Or are there other / better approaches?


At the moment, we recommend implementing UI with Unity’s built-in UGUI system (Canvas, EventSystem, etc.). Regarding visionOS SwiftUI components, this thread where folks are asking about using Unity as a Library has some info on how to get started there. We still haven’t explored this on our end, so all I can do at the moment is offer some general advice and pointers.

UI Toolkit is currently not supported.
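As a rough starting point for the UGUI approach above, here is a sketch that assembles a world-space Canvas with a GraphicRaycaster and a single EventSystem at runtime. The helper name (`WorldSpaceUISetup`) and the sizing values are illustrative, it assumes the TextMeshPro essentials are in the project, and the input module you pair with the EventSystem depends on your project’s input setup.

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;
using TMPro;

// Rough sketch: builds a world-space UGUI canvas at runtime with a text label,
// plus a single EventSystem so UGUI input events can be routed.
public static class WorldSpaceUISetup
{
    public static Canvas CreateInfoPanelCanvas(Transform parent)
    {
        // UGUI needs exactly one EventSystem in the scene. Which input module
        // to add alongside it depends on how input is handled in the project.
        if (Object.FindObjectOfType<EventSystem>() == null)
        {
            new GameObject("EventSystem", typeof(EventSystem));
        }

        // Canvas + GraphicRaycaster so elements on it can receive pointer events.
        var canvasGO = new GameObject("InfoPanelCanvas",
            typeof(Canvas), typeof(CanvasScaler), typeof(GraphicRaycaster));
        canvasGO.transform.SetParent(parent, worldPositionStays: false);

        var canvas = canvasGO.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // World-space canvases are sized in scene units; scale the rect down
        // so a 600x400 layout becomes roughly a 0.6 m x 0.4 m panel.
        var rect = canvasGO.GetComponent<RectTransform>();
        rect.sizeDelta = new Vector2(600f, 400f);
        canvasGO.transform.localScale = Vector3.one * 0.001f;

        // Simple TextMeshPro label as placeholder panel content.
        var labelGO = new GameObject("Label", typeof(TextMeshProUGUI));
        labelGO.transform.SetParent(canvasGO.transform, worldPositionStays: false);
        var label = labelGO.GetComponent<TextMeshProUGUI>();
        label.text = "Info panel";
        label.alignment = TextAlignmentOptions.Center;
        label.GetComponent<RectTransform>().sizeDelta = rect.sizeDelta;

        return canvas;
    }
}
```

You could then parent such a canvas to (or next to) the visualized object and drive its content from the same data as the 3D scene, which keeps the UI and the geometry coupled without leaving Unity.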