“No spatial audio”: does that mean audio doesn’t work at all, or that it is not spatialized?
“Video Player: Limited Support”: I have seen you have a dedicated PS component for it (PS Video Component). Are there any limitations on the type of video you can play (container, codec) or on lifecycle/control of playback?
“Light Probe: Manual Support”: Can you explain more about what you mean by manual support? RealityKit supports some form of IBL as far as I know, so I’m wondering if that is mapped to light probes in Unity or something else. In general, what is the lighting support/control via PS (some default light controlled by the OS)?
“UI: Canvas Renderer • Partially Supported”: Can you explain the partial aspect?
“Text Mesh Pro - Raster Only”: What do you mean exactly by raster only? Are you supporting both UI and 3D text GameObjects?
Is the RealityKit ground-plane shadow supported with PS?
Can you confirm that all of these ARKit features are supported via PS:
Horizontal plane detection
Vertical plane detection
Anchors
Image tracking
Meshes / Scene Reconstruction
Hand tracking
In the initial video announcement during WWDC there was a reference to UI Toolkit, while I see the samples are built around Unity UI. Was that an incorrect reference, or is this something on your roadmap?
Have you considered a Unity UI over SwiftUI code path? If not, have you explored any code path for leveraging SwiftUI with a Unity app (“Unity as a library” for visionOS)?
Can you briefly explain which features of the XRI package are usable for mixed reality apps, exclusive mode, and unbounded volumes? (e.g., will 3D spatial gestures work?)
“Light Probe: Manual Support”: Can you explain more about what you mean by manual support? RealityKit supports some form of IBL as far as I know, so I’m wondering if that is mapped to light probes in Unity or something else. In general, what is the lighting support/control via PS (some default light controlled by the OS)?
We don’t currently have any support for light probes, but it is in development, along with other forms of lighting such as light maps and dynamic directional/point/spot lights. For now, the only form of lighting available is the image-based lighting supplied by RealityKit.
“UI: Canvas Renderer • Partially Supported”: Can you explain the partial aspect?
Not all UI elements are supported/tested yet. For example, while panels, buttons, and dropdowns should work, scroll views probably will not. We are planning to focus more on UI in a future release.
“Text Mesh Pro - Raster Only”: What do you mean exactly by raster only? Are you supporting both UI and 3D text GameObjects?
This information is out of date. We support TMPro’s SDF text in both UI (GameObject/UI/Text - TextMeshPro, buttons, etc.) and non-UI (GameObject/3D Object/Text - TextMeshPro) forms.
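To illustrate, both forms can be driven from script with the standard TMPro API; a minimal sketch (the component references and strings are just examples):

```csharp
using TMPro;
using UnityEngine;

// Minimal sketch: updating both TMP variants from script. The UI
// (TextMeshProUGUI) and 3D (TextMeshPro) components share the same
// SDF text pipeline.
public class LabelUpdater : MonoBehaviour
{
    [SerializeField] TextMeshProUGUI uiLabel; // GameObject/UI/Text - TextMeshPro
    [SerializeField] TextMeshPro worldLabel;  // GameObject/3D Object/Text - TextMeshPro

    void Start()
    {
        uiLabel.text = "Score: 0";
        worldLabel.text = "Hello, visionOS";
    }
}
```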
In the initial video announcement during WWDC there was a reference to UI Toolkit, while I see the samples are built around Unity UI. Was that an incorrect reference, or is this something on your roadmap?
This is definitely on our road map and currently works at least partially, though it depends on our support for RenderTextures, which is limited at present. We will have better support and examples in a future release.
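For context on that dependency: in standard Unity, a UI Toolkit panel can render into a RenderTexture via PanelSettings.targetTexture, which is presumably why UI Toolkit support hinges on RenderTexture support. A minimal sketch of that standard mechanism (field names illustrative):

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Sketch of the standard UI Toolkit render-to-texture path: a panel draws
// into a RenderTexture, which is then displayed on a mesh in the scene.
public class UIToolkitToTexture : MonoBehaviour
{
    [SerializeField] PanelSettings panelSettings; // shared with a UIDocument in the scene
    [SerializeField] Renderer targetRenderer;     // quad that displays the UI

    void Start()
    {
        var rt = new RenderTexture(1024, 768, 24);
        panelSettings.targetTexture = rt;          // UI Toolkit renders into this texture
        targetRenderer.material.mainTexture = rt;  // show it on the mesh
    }
}
```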
Audio does work and is partially spatialized, but it is not yet fully integrated with the visionOS spatial audio system.
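In Unity terms, spatialization here means the standard positional AudioSource setup; for reference, a minimal example using ordinary Unity audio API:

```csharp
using UnityEngine;

// Standard Unity positional audio setup. On visionOS this plays, but is
// only partially spatialized at present, per the answer above.
[RequireComponent(typeof(AudioSource))]
public class SpatialAudioSetup : MonoBehaviour
{
    void Awake()
    {
        var source = GetComponent<AudioSource>();
        source.spatialBlend = 1f; // 1 = fully 3D positional, 0 = 2D
        source.Play();
    }
}
```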
Ground-plane shadow support is being worked on for future releases. Note that the ground shadow is just a shadow cast from the top down.
SwiftUI integration is something we are exploring, but we have nothing to share at this time.
Yes, all of those ARKit features are supported via PolySpatial.
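For reference, these features are surfaced through the usual AR Foundation managers (hand tracking comes from the separate XR Hands package). A minimal sketch, assuming a standard AR Foundation scene setup:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Minimal sketch: the standard AR Foundation managers corresponding to the
// features listed above. These sit on the XR rig alongside ARSession
// (ARMeshManager must live on a child of the XR Origin).
public class ARFeatureSetup : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;        // horizontal/vertical plane detection
    [SerializeField] ARAnchorManager anchorManager;      // anchors
    [SerializeField] ARTrackedImageManager imageManager; // image tracking
    [SerializeField] ARMeshManager meshManager;          // meshes / scene reconstruction

    void OnEnable()
    {
        // Example: request both horizontal and vertical plane detection.
        planeManager.requestedDetectionMode =
            PlaneDetectionMode.Horizontal | PlaneDetectionMode.Vertical;
    }
}
```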
The plan is to have XRI supported with the spatial tap gesture so it could work in both bounded and unbounded apps. The features are mostly around enabling interactables to be grabbed and manipulated by the user as well as interacting with UI elements. This is still being worked on.
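As a concrete illustration of that intended usage, here is a minimal interactable built with the standard XR Interaction Toolkit components. This is a sketch of ordinary XRI setup, not PolySpatial-specific API; driving selection via the spatial tap gesture is still in development as noted above.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Minimal sketch: an object that can be grabbed and manipulated via XRI.
// Once visionOS support lands, the spatial tap gesture would drive
// selection of interactables like this one.
[RequireComponent(typeof(XRGrabInteractable))]
public class GrabbableSetup : MonoBehaviour
{
    void Awake()
    {
        var interactable = GetComponent<XRGrabInteractable>();
        interactable.selectEntered.AddListener(_ => Debug.Log("Grabbed"));
        interactable.selectExited.AddListener(_ => Debug.Log("Released"));
    }
}
```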
Supported and tested containers include .mp4, .m4v, and .mov. In general, container formats that are supported by both Unity for macOS (see Unity - Manual: Video file compatibility) and Apple’s AVFoundation library should be supported, but they may not have been tested extensively.
For control, right now there is only a limited subset of methods you can use (Play(), Stop(), Pause(), setting playOnAwake/looping, and volume control) and no way to query video playback state or set event delegates. There are plans to shift toward using the normal Unity VideoPlayer component to enable full usage of Unity’s video features, but nothing is concrete yet.
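To make the shape of that API concrete, here is a hypothetical sketch. The component type name (VisionOSVideoComponent) and the exact property names are assumptions based on the controls listed above; check the PolySpatial package for the actual types and signatures.

```csharp
using UnityEngine;
using Unity.PolySpatial; // namespace assumed; check your PolySpatial package version

// Hypothetical sketch of the limited control surface described above.
// The component type and property names are assumptions, not confirmed API.
public class IntroVideo : MonoBehaviour
{
    [SerializeField] VisionOSVideoComponent video; // assumed type name; assign in the Inspector

    void Start()
    {
        video.playOnAwake = false; // assumed property name
        video.isLooping = true;    // assumed property name
        video.volume = 0.8f;       // assumed property name
        video.Play();              // Play/Pause/Stop are the listed controls
    }

    // No playback-state queries or event delegates are available yet,
    // so completion must be handled externally (e.g., with a timer).
    public void PauseVideo() => video.Pause();
    public void StopVideo() => video.Stop();
}
```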
Yes! We now have a PolySpatial Lighting Node that implements a subset of Unity lighting within shader graphs (up to four dynamic point/spot/directional lights, light probes, light maps, and reflection probes).
The plan is to have XRI supported with the spatial tap gesture so it could work in both bounded and unbounded apps. The features are mostly around enabling interactables to be grabbed and manipulated by the user as well as interacting with UI elements. This is still being worked on.
@DanMillerU3D
Is there any update on when XRI support for Mixed Immersive Spaces will be available?
Yes! We now have a PolySpatial Lighting Node that implements a subset of Unity lighting within shader graphs (up to four dynamic point/spot/directional lights, light probes, light maps, and reflection probes).
@kapolka Question on the PolySpatial Lighting Node article:
Are reflection probes now supported (transferred to a RealityKit equivalent)? In the article Supported Unity Features & Components | PolySpatial visionOS | 0.4.3 (unity3d.com), they are still marked as “Not Supported”.
Or does the PolySpatial Lighting Node only provide shader graphs with an input for consuming RealityKit-authored reflection probes?
We are not seeing our reflection probes being converted when using URP Lit shaders. Or does reflection probe conversion only happen when a shader graph with a PolySpatial Lighting Node is in the scene?
Thanks for the heads-up; I will update that documentation. We support reflection probes (only) through the PolySpatial Lighting Node.
I’m not exactly sure what you mean by “RealityKit-authored reflection probes,” but it’s probably worth noting that the reflection probes we support come entirely from Unity rendering (typically, they’re baked from the scene as a preprocess). Notably, they won’t contain the visionOS image-based lighting that you see when using the URP/Lit shaders; however, you can use a Lit shader graph target to combine the output of the PolySpatial Lighting Node with the visionOS image-based lighting.
This is correct; reflection probes won’t work with URP/Lit materials, only with shader graph materials that use the PolySpatial Lighting Node (which has options to enable single or blended reflection probes). Also, we’ve only tested with baked reflection probes (as opposed to dynamic ones).