Though I suspect I know the answer: what about going from windowed to shared/mixed reality? For example, having a game where all the usual menus and such are in windowed mode, but gameplay would switch to a volume and then exit back?
+1 I’m really interested in this answer. Do you have any news?
Yes, this is supported with the latest updates to the volume camera configuration. We don’t fully support windows yet, but having an app that moves between the shared space (volumes) and the immersive space (mixed reality) is possible. You can just have two different scenes with different volume camera setups and load them in Unity.
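For anyone looking for a concrete starting point, here’s a rough sketch of what that two-scene approach might look like. The scene names and the `SpaceSwitcher` wrapper are placeholders; it assumes each scene contains a PolySpatial volume camera with the appropriate configuration (bounded for the shared-space menu, unbounded for mixed-reality gameplay):

```csharp
// Sketch: one scene per volume camera configuration.
// "MenuScene" contains a volume camera using a bounded (shared space)
// configuration; "GameScene" uses an unbounded (immersive) one.
using UnityEngine;
using UnityEngine.SceneManagement;

public class SpaceSwitcher : MonoBehaviour
{
    // Called from the menu UI to enter gameplay (immersive space).
    public void EnterImmersiveGameplay()
    {
        SceneManager.LoadScene("GameScene"); // unbounded volume camera lives here
    }

    // Called from gameplay to return to the shared-space menu volume.
    public void ExitToMenu()
    {
        SceneManager.LoadScene("MenuScene"); // bounded volume camera lives here
    }
}
```

Wiring these methods to UI buttons is enough to hop back and forth, since loading a scene with a differently configured volume camera is what triggers the mode change.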
Because of how we support VR mode (Metal compositor), we don’t support the ability to go between a VR app and an app in the shared space (PolySpatial app) or mixed reality (PolySpatial app).
An app can transition between shared space (volume) and immersive space (mixed reality) and back whenever it chooses.
Hope this helps clear things up a bit.
Thanks for clearing up the state of things. It’s a bummer that anyone building VR games (using Metal, for full stylistic control) won’t be able to use the platform-level gaze + pinch within their apps. I think users may be confused by this or think that VR devs are being lazy on the design side, so I hope this can make it onto the roadmap if technically feasible.
And FYI, if anyone is wondering if gaze + pinch will be available directly in full VR apps, it seems the answer is no for now as well. More on this here:
Can we get VR mode for PolySpatial apps then, as an alternative to the Metal compositor (with all the pros and cons that go along with it)? Our app (mainly a mixed reality app that sometimes needs to switch to VR) would greatly benefit from that.
Would it potentially be possible using “Unity as a package” to have our app load an intro screen in mixed reality (say created in SwiftUI), then have a button that would load the fully immersive VR experience made with Unity?
I’ll note that as feedback but it’s not currently in our development plans.
There’s some discussion around using “Unity as a Library” here: Unity as a library on VisionOS - #3 by timonweide-sap, but I don’t think it’s fully working yet.
Hi @DanMillerU3D, thanks for the key information on this thread. On Oct 23, you mentioned
We don’t fully support windows yet
but I haven’t been able to track down documentation on what’s supported vs what isn’t. Specifically, we were wondering if the following is supported, or will be soon:
- resizing windows using the visionOS resize control in the lower-right corner.
- multiple windows (bounded volumes in a shared space) concurrent in a scene.
- a material similar to Glass, which gives the translucent properties recommended by Apple.
Thanks again for your help!
Thanks for the explanation. Does this mean Unity won’t support switching between MR and VR? So we have to build two apps, one for MR (shared and immersive) and one for VR?
Thanks for the reply.
@DanMillerU3D, is there any shareable info on whether Unity will support MR and VR in the same project on visionOS?
Hi @DanMillerU3D Thanks for clarifying.
Is switching between VR and MR mode something you are planning to implement in the future, or is it simply not possible?
We’re building an app which would greatly benefit from switching between the modes.
I asked the Apple guys yesterday during my lab session in London about switching at runtime between a mixed immersive space and a fully immersive space, and they said it’s totally feasible natively, so I suppose Unity could do it also… @DanMillerU3D, what is the status of this feature? Thanks
Currently, due to the architectural differences between how we support mixed reality (PolySpatial) and how we support virtual reality (Metal compositor), it is not possible to have a single app that can transition between VR → MR or MR → VR.
I recommend using our recently launched visionOS roadmap to surface this idea.
Please tell me if I’m understanding this correctly:
Unity PolySpatial Shared mode corresponds to visionOS’s volumetric window: https://developer.apple.com/documentation/SwiftUI/WindowStyle/volumetric
Unity PolySpatial Exclusive mode corresponds to visionOS’s mixed ImmersiveSpace: https://developer.apple.com/documentation/swiftui/immersionstyle/mixed
With Unity PolySpatial:
Unity converts Unity objects into visionOS’s RealityKit.
So with PolySpatial, there is a chance we can transition between:
volumetric window <-> mixed ImmersiveSpace
volumetric window <-> full ImmersiveSpace
mixed ImmersiveSpace <-> full ImmersiveSpace
But… without PolySpatial:
Without Unity PolySpatial, meaning we don’t use RealityKit in the full ImmersiveSpace,
Unity uses CompositorServices and renders with Metal.
I think it’s hard for Unity to support RealityKit and CompositorServices at the same time.
So transitioning between the two is hard without PolySpatial.
Hi Dan, has there been any consideration of supporting VR WITHIN PolySpatial (using RealityKit components and the RealityKit renderer)? This would give us the functionality we want (eye gaze / hover component). Also, is there any way to render a skybox while in MR mode using PolySpatial?
If you want me to submit this as feedback somewhere, please let me know the best place to do so.
Hi @logansitu –
Nothing prevents you from adding sufficient virtual content to obscure the physical scene - whether a skybox or otherwise. There’s a bit of a question about how to place virtual content with respect to the real world (if you want any intermingling at all), but I think all the tools you need for this are already available. But did you have particular Unity features in mind that would further help with this approach?
I’ll also note that the “Stereo Render Targets” feature (slated for 2024) may prove valuable here, as it would allow you to render that skybox in real time using Unity’s full suite of rendering features. This would further augment the above approach, allowing you to use your existing VR shaders, graphical effects, etc. rather than converting to Shader Graph and working around feature limitations. Hope that helps!
On the product roadmap, there is a “+ Submit new idea” button in the upper right.
You can place an inverted sphere around the user, but it will have a visual affordance where it clips the real world (it fades out).
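As a rough sketch of the inverted-sphere idea (class name, radius, and placement are arbitrary): create a sphere at runtime, reverse its triangle winding so the inside faces render, and scale it large enough to surround the user. The material (e.g. a skybox texture) is up to you and would need to be PolySpatial-compatible:

```csharp
// Sketch: wrap the user in an inverted sphere so virtual content
// obscures passthrough. Reversing the index buffer flips each
// triangle's winding, making the inside of the sphere visible.
using UnityEngine;

public class SkySphere : MonoBehaviour
{
    void Start()
    {
        var go = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        go.transform.position = Camera.main.transform.position;
        go.transform.localScale = Vector3.one * 50f; // big enough to enclose the play area

        // Copy the shared mesh before modifying it, then reverse winding.
        var filter = go.GetComponent<MeshFilter>();
        var mesh = Instantiate(filter.mesh);
        var tris = mesh.triangles;
        System.Array.Reverse(tris); // (a,b,c) becomes (c,b,a) per triangle
        mesh.triangles = tris;
        filter.mesh = mesh;
    }
}
```

Flipping the normals as well (or using a shader with culling set to Front) would give better lighting on the interior surface, but the winding flip alone is enough to make the inside render.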
I think support for the progressive immersion style, which presents a skybox the user controls with the Digital Crown, could also be a potential solution.
The switch to AR/MR mode is basically something we already do on mobile devices with AR Foundation. Why isn’t that a possibility here? (Even if it would still limit functionality like eye-tracking data, it would allow us to use existing content.)
So this is still not available, right? I wonder if there could be some weird, hacky workaround, since apparently there’s the progressive immersion mode for the AVP that kind of supports this?
I’m also wondering how games like Synth Riders achieve this effect, since they have both ‘VR’ and ‘MR’ modes. I wonder if they’re literally creating a giant inverted box with a skybox texture for the VR mode.
Currently, blend shape animations are somehow only supported in virtual mode.
Being able to transition between mixed mode and virtual mode (where blend shape animations are possible) would be great!