I’ve seen some windowed visionOS apps loading spatial 3D models and interactable 3D UI outside the window bounds. I’m wondering how this is achieved. Thanks.
Can you tell whether the 3D content is inside a separate volume window, or in the immersive space? If it’s a Unity app, it’s probably achieved with an immersive space and a render texture that approximates or recreates the look of a window.
In either case, we’re considering adding support for multiple volumes, which would enable this kind of functionality. You can cast your vote in favor of it under “Improved Camera Support” on our public roadmap.
It turns out these apps weren’t in windowed mode after all, but in immersive mode. I think I’ve figured it out.
When the app runs you see a window, but it’s built with SwiftUI, which gave me the impression the app was in windowed mode. From there you can populate multiple 3D models and UI elements around you in the Shared Space, something I don’t think is possible in windowed mode.
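For anyone curious, here’s a minimal sketch of how a native app could combine both: a regular SwiftUI window plus an immersive space that places 3D content around the user. This is an illustrative guess at the structure, not code from any of the apps mentioned — the scene id "models" and view names are made up.

```swift
import SwiftUI
import RealityKit

@main
struct ExampleApp: App {
    var body: some Scene {
        // The 2D-looking "window" the user sees first — plain SwiftUI.
        WindowGroup {
            ContentView()
        }

        // A separate immersive scene whose content is not bounded
        // by any window; entities can be placed around the user.
        ImmersiveSpace(id: "models") {
            RealityView { content in
                // Load and position RealityKit entities here,
                // e.g. models from a .usdz file.
            }
        }
        // .mixed keeps passthrough and other apps' windows visible.
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        // Tapping the button opens the immersive space while the
        // SwiftUI window itself stays on screen.
        Button("Show 3D content") {
            Task { await openImmersiveSpace(id: "models") }
        }
    }
}
```

With the `.mixed` immersion style, the SwiftUI window stays visible alongside the 3D content, which would explain why the app looks windowed at first glance.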
Will it ever be possible to achieve that UI look and functionality in an immersive space with Unity?