We’re excited to share that Unity’s support for new features in visionOS 2 is coming soon, in addition to new capabilities we’ve been working on. We are progressing well through our final stages of testing and integration, and are planning for the next 2.x pre-release package to be available with Unity 6 Preview in the upcoming weeks.
To recap, this release will help you create more immersive experiences and achieve greater visual diversity with the following key features:
Support for blendshapes to enable a wider range of geometric applications, including deformations and smooth, natural-looking animations (see the scripting sketch after this list).
Stereo render targets to help users achieve a wide range of effects including stereo windows and holographic 3D projections.
Hybrid apps that enable mode switching across Mixed Reality and fully immersive experiences.
Support for multiple volumes to enable a broader range of interactive content.
Support for Entities Graphics to enable more diverse visual experiences.
Shader debugging to help you create and customize shaders more efficiently.
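As a quick, hedged illustration of what blendshape support means on the Unity side, the sketch below drives a blendshape weight from a script using the standard SkinnedMeshRenderer API. The component name, blendshape index, and ping-pong animation are placeholders, not PolySpatial-specific API.

```csharp
using UnityEngine;

// Minimal illustration: animate the first blendshape on a skinned mesh.
// The blendshape index (0) and the ping-pong animation are placeholders --
// in practice you would drive weights from your own animation logic.
[RequireComponent(typeof(SkinnedMeshRenderer))]
public class BlendShapePulse : MonoBehaviour
{
    [SerializeField] int m_BlendShapeIndex = 0;
    [SerializeField] float m_Speed = 50f;

    SkinnedMeshRenderer m_Renderer;

    void Awake() => m_Renderer = GetComponent<SkinnedMeshRenderer>();

    void Update()
    {
        // Blendshape weights are expressed in the 0-100 range.
        float weight = Mathf.PingPong(Time.time * m_Speed, 100f);
        m_Renderer.SetBlendShapeWeight(m_BlendShapeIndex, weight);
    }
}
```

In practice you would usually drive these weights from Animator clips or Timeline rather than a hand-written script; the point is simply that blendshape weights authored or scripted in Unity can now be reflected in the visionOS output.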
In addition, we will be supporting visionOS 2 features including:
Metal rendering with passthrough, so you can build passthrough content while leveraging Unity graphics features that are compatible with Metal out of the box.
Resizable volumes that help you achieve greater control over the scale within bounded volumes.
Dynamic lights and shadows for greater immersion and realism in Mixed Reality experiences (a short example follows this list).
Custom hover effects for greater visual control of hover interactions when a user is gazing at objects.
Improved support for shaders to enable a greater range of visual effects, including the glass material appearance on SwiftUI interfaces.
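For the dynamic lights and shadows item, here is a minimal sketch using ordinary Unity Light APIs. Nothing in it is visionOS-specific, and which light and shadow settings are honored in Mixed Reality is determined by the platform support described above.

```csharp
using UnityEngine;

// Standard Unity usage: create a directional light with soft shadows at runtime.
// Which of these settings carry through to a Mixed Reality scene depends on
// the platform's dynamic light and shadow support.
public class RuntimeSun : MonoBehaviour
{
    void Start()
    {
        var go = new GameObject("Sun");
        var sun = go.AddComponent<Light>();

        sun.type = LightType.Directional;
        sun.shadows = LightShadows.Soft;
        sun.intensity = 1.2f;
        sun.color = new Color(1f, 0.96f, 0.9f);

        go.transform.rotation = Quaternion.Euler(50f, -30f, 0f);
    }
}
```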
Guidelines for Pre-release Packages:
While pre-release packages have stable features and APIs, work is ongoing on performance improvements and overall stabilization. As such, the upcoming 2.x pre-release package should not be used for production. Production projects should use our latest 1.x packages, which will continue to receive performance improvements and bug fixes.
These are a few of the updates we’ve been developing for the next release, and an exhaustive list of features, updates and fixes will be shared via the release notes.
I’m a bit confused by this part: “planning for the next 2.x pre-release package to be available with Unity 6 Preview in the upcoming weeks”.
Unity 6 Preview is already out so what does this mean? Will the next Polyspatial release only be compatible with an upcoming Unity 6 release?
Can you be a bit more specific on the timeframe? Upcoming weeks could be next week or in a couple of months…
Thanks and looking forward to it!
Thanks Luis! To clarify, the next 2.x pre-release package will be available with an upcoming patch release of the Unity 6 Preview editor. I won’t be able to provide more concrete timeframes, but we don’t expect our final stages of testing and integration to take more than a few weeks.
Cool!
I’m wondering what the pros and cons of using RealityKit mixed reality vs Metal passthrough will be.
Will it be possible to access light estimation or environment textures through ARKit to better match real lighting conditions with Metal passthrough? (I’ve sketched the AR Foundation pattern I have in mind at the end of this post.)
Will it support foveated rendering?
Will PolySpatial be required to use Metal passthrough?
Any other downsides to Metal passthrough compared to RealityKit that would be good to know? The obvious ones I can think of are: no built-in grounding shadows, no eye-tracked hover effects, no Play-to-Device.
I realize most of these questions are not strictly Unity related, but I couldn’t find any documentation or resources from Apple that clarify these points.
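For reference, by light estimation I mean the standard AR Foundation pattern sketched below (ARCameraManager plus the nullable ARLightEstimationData fields). Whether any of these values are actually populated under Metal passthrough on visionOS is exactly what I’m unsure about.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Standard AR Foundation light estimation: subscribe to camera frames and
// read whatever estimation data the platform chooses to provide.
public class LightEstimationProbe : MonoBehaviour
{
    [SerializeField] ARCameraManager m_CameraManager;
    [SerializeField] Light m_MainLight;

    void OnEnable()  => m_CameraManager.frameReceived += OnFrameReceived;
    void OnDisable() => m_CameraManager.frameReceived -= OnFrameReceived;

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        var estimation = args.lightEstimation;

        // Each field is nullable; platforms only fill in what they support.
        if (estimation.averageBrightness.HasValue)
            m_MainLight.intensity = estimation.averageBrightness.Value;

        if (estimation.mainLightDirection.HasValue)
            m_MainLight.transform.rotation =
                Quaternion.LookRotation(estimation.mainLightDirection.Value);

        if (estimation.mainLightColor.HasValue)
            m_MainLight.color = estimation.mainLightColor.Value;
    }
}
```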
We’re shipping soon, so we’re locked into Unity 2022.3. Will there continue to be any updates for that version of the plugin? I’m thinking more along the lines of things like the new 90 Hz hand tracking data that drops with visionOS 2 rather than some of the heavy-hitting new features… (though being able to show passthrough in our Metal app is tempting!!)
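(For context, by hand tracking data I mean the per-frame joint poses we read through the XR Hands package, roughly as in the sketch below; the class name is just for illustration.)

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

// Rough illustration of how we consume hand tracking in Unity today:
// find the XRHandSubsystem, then sample joint poses each frame.
public class IndexTipLogger : MonoBehaviour
{
    XRHandSubsystem m_Subsystem;

    void Update()
    {
        if (m_Subsystem == null)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0)
                return;
            m_Subsystem = subsystems[0];
        }

        var hand = m_Subsystem.rightHand;
        if (hand.isTracked &&
            hand.GetJoint(XRHandJointID.IndexTip).TryGetPose(out var pose))
        {
            Debug.Log($"Right index tip: {pose.position}");
        }
    }
}
```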
Hey Mike! Unfortunately, hover is enabled via RealityKit and is unavailable with Fully Immersive, which is driven by Metal-based rendering.
Regarding your second question, we plan to continue providing stabilization and bug fixes for our 1.x packages, but we expect new features from visionOS 2 will only be available on 2.x and beyond.
We are always looking out for feedback though, so if there’s anything functionally critical that you feel will be helpful in 1.x, do let us know via our product roadmap.
No, but if you’re looking to develop content for the Shared Space, PolySpatial will still be required.
You’re hitting quite a few of the main points here; we will do our best to shed light on these in our technical documentation. The key difference is that developing for the Shared Space requires the PolySpatial pipeline we have in place for Mixed Reality experiences.
All in all, thanks for sharing these questions. We look forward to hearing feedback like this, so we know where we can make our documentation and templates more robust.
The dial interfaces with a system-level API, but the feature we’re working on will give you more ways to transition between Mixed Reality and fully immersive content within the application itself.
But the crown toggle is coming with one of these updates? Or do I need to write a native plugin myself for that? Because that’s not clear to me at the moment.
We hope to include support for the crown immersion amount in our upcoming release, though for transparency this one is currently in progress rather than finished (which is one reason it’s not called out explicitly in the top-level list). On the outside chance it doesn’t make our very next set of pre-release packages, I’d expect it to be in the subsequent one.
Honestly, this is probably more a question for Apple than for Unity, but our general understanding is that you’d need to build with the latest Xcode to get all the new features. That may present a problem for your upcoming release anyway, as Xcode support for visionOS 2.0 is also in beta at this time.
@IsaacsUnity First and foremost, I would like to express my excitement about your new plan. It is an impressive achievement that showcases the dedication and hard work of your team. While the update brings many exciting features, I am writing to share my concerns about the reliability of the Unity and Swift integration, which I believe is crucial for the smooth functioning of our projects.
Our team has been using the PolySpatial Swift interface, specifically the PolySpatialWindowManagerAccess.entityForIdentifier function. We have encountered an issue since the release of plugin version 1.2.3: we have been unable to successfully obtain Unity entities from the Swift side ever since the update. We have tried to follow the explanations kapolka posted in the discussions, but the issue still persists.
We appreciate the outstanding work you have done so far. However, we kindly request that you prioritize the Unity and Swift integration issue we have encountered.
@IsaacsUnity, @kapolka
First, similar to other comments, I’m really glad to see all the effort you’ve put towards compatibility and new feature support for visionOS 2.0. Big congrats on this!
Second, and following the questions from Peter-Aldin, I’m still not sure I fully understand the pros/cons of Metal passthrough mode vs RealityKit mode. So let’s assume there will be a toggle in PolySpatial to switch between the two modes.
What will not work with Metal passthrough mode:
Hover effects (no gaze access)?
Limited access to light estimation (quality)?
Play to Device?
SwiftUI / window access and compatibility?
What about Shader Graph?
What else are you already aware of that will not work?
On the other side, what new features will Metal passthrough enable:
Full custom (stereo) shaders?
Full lighting support (via the Unity rendering pipelines)?
What else?
Thanks in advance for any info on that. For many companies, knowing what will and won’t be supported will help us plan our product roadmaps, the switch to Unity 6 (on release), and our feature sets.
SwiftUI windows will work, but the current injection path we use in PolySpatial isn’t fully hooked up (you can do it manually, though).
Shader Graph shaders should work (as well as hand-written shaders), as long as they are supported on Metal.
Correct, you will get most rendering features that are supported on Metal platforms out of the box, and you can also enable passthrough rendering.
The biggest difference is that when rendering with Metal, only fixed foveated rendering is supported. In PolySpatial, when rendering with RealityKit, you get dynamic foveation.
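If it’s useful, here is a small, hedged sketch of the generic Unity XR surface for requesting a foveation level via XRDisplaySubsystem. Whether and how a given backend (including Metal rendering on visionOS) honors this setting, and whether that ends up fixed or gaze-based, is decided by the platform itself.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

// Generic Unity XR pattern: request a foveation level on the active display
// subsystem. The platform decides whether and how this is applied.
public class FoveationLevel : MonoBehaviour
{
    [Range(0f, 1f)]
    [SerializeField] float m_Level = 1f;

    void Start()
    {
        var displays = new List<XRDisplaySubsystem>();
        SubsystemManager.GetSubsystems(displays);

        foreach (var display in displays)
            display.foveatedRenderingLevel = m_Level;
    }
}
```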