I’m wondering if there is currently support (or plans to support) windowed apps with stereo depth. Either in the style of a 3D movie, where each eye gets a slightly different view of the scene (from a fixed perspective), or as a “portal” looking through into the game world, allowing the player to move their head and peek around.
Ideally the window would be a native visionOS window, complete with rounded bezels and move/scale handles.
It feels like this would be a really compelling option for developers looking to quickly and easily port existing games to the platform.
Is there any official word from Unity on this? I may be wrong, but I’d expect this to be a pretty common use-case. With minimal effort, almost any existing game could be ported to the platform, especially side scrollers.
Are there any other developers on this forum looking to achieve a similar result? Please raise your hand!
For what it’s worth, we’ve tried a number of hacks to get this working, but all have pretty major drawbacks and don’t feel particularly “native”:
Rendering our scene to two separate RenderTextures, from slightly different perspectives. These textures are passed to a custom material, applied to a quad in a bounded volume, that uses the eyeIndex parameter to render a different image to each eye (a rough sketch of this setup follows below this list):
This works, but we lack the ability to move and resize the window in a native way.
It seems to put an unusually high strain on the system, with the game running at sub 10fps, which is obviously not viable.
We also don’t have parallax responding to head movements, so the illusion of depth is limited.
An unbounded volume framed by some quads with an occlusion material, creating a small “window” looking into the scene beyond:
This creates the effect we’re looking for, but because visionOS treats everything as a 3D object in the shared space, it fades behind the objects in your room, which looks pretty messy in smaller environments.
It’s also possible to clip your head through the occlusion geometry, revealing the scene beyond.
A fully immersive VR space with a cutout window to our scene:
Unfortunately, at this time, fully immersive mode isn’t working on device, as pointed out in this thread.
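For reference, here’s a rough sketch of the camera-side wiring for the first approach (the two-RenderTexture one). This is just how we set it up, not an official pattern, and the `_LeftEyeTex`/`_RightEyeTex` property names are placeholders for whatever inputs your eyeIndex ShaderGraph material actually exposes:

```csharp
using UnityEngine;

// Rough sketch of the two-RenderTexture stereo setup described above.
public class StereoRenderTextureRig : MonoBehaviour
{
    public Camera leftEye;        // two standard Unity cameras on this rig,
    public Camera rightEye;       // offset horizontally by the IPD
    public Renderer displayQuad;  // the quad shown inside the bounded volume
    public float ipd = 0.064f;    // ~64 mm average interpupillary distance

    RenderTexture leftRT, rightRT;

    void Start()
    {
        leftRT  = new RenderTexture(1280, 720, 24);
        rightRT = new RenderTexture(1280, 720, 24);

        leftEye.targetTexture  = leftRT;
        rightEye.targetTexture = rightRT;

        // Offset each camera half the IPD to either side of the rig's center.
        leftEye.transform.localPosition  = new Vector3(-ipd * 0.5f, 0f, 0f);
        rightEye.transform.localPosition = new Vector3( ipd * 0.5f, 0f, 0f);

        // The quad's ShaderGraph material samples one texture per eye via
        // the eye index; these property names are placeholders.
        displayQuad.material.SetTexture("_LeftEyeTex",  leftRT);
        displayQuad.material.SetTexture("_RightEyeTex", rightRT);
    }
}
```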
I also think this could be a great way to bring certain games to Vision Pro with a lot less effort, while still keeping that cool 3D effect.
I was wondering about this… Does rendering into the RenderTextures still use RealityKit (i.e., would you still need to make every shader with ShaderGraph), or does it use Unity’s rendering engine for the RenderTextures?
Yeah, this is another huge advantage to this approach. Because you’re still rendering your game with standard cameras, you don’t need to convert all your shaders to URP/ShaderGraph. The only material that needs to support RealityKit is the one that displays the RenderTextures on a quad inside your VolumeCamera.
This is something we’re actively working on (support for stereo render targets rendered in Unity and displayed in RealityKit).
As @harrynesbitt pointed out, yes, it would use Unity’s rendering engine.
There was a bug with RenderTextures in previous releases that forced them to use a slower, fallback path. They should be significantly faster with the 0.3.3 bugfix packages that we just released.
This is exciting! Is there an expected timeline? Let me know if there’s anything I can do to help. More than happy to test an alpha and provide feedback if needed.
Thanks for this! I’ve downloaded the new packages, but unfortunately I’m still seeing the same poor performance. Is there a trick to getting this working correctly?
The only thing to check would be that the “Disable Native Texture” runtime flag is not set in the PolySpatial settings. When it is set, we expect to see substantially reduced performance.
This flag is not set for me. It’s possible I’m seeing a slight improvement on device compared to the simulator, but the frame rate is still notably low, around 10-15 fps.
Interestingly, performance is equally poor for low resolution textures (e.g. 640x360) as it is for high resolution (2560x1440), suggesting pixel count is not the bottleneck.
Another thing to be aware of is the render texture format. If you’re using an unsupported format, we fall back to the slow path (and you’ll get a warning indicating that this has happened). Our tests have been using R8G8B8A8_UNORM.
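For example, here’s a minimal sketch of creating a texture explicitly in that format (the size and depth bits are arbitrary):

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

public class SupportedFormatRT : MonoBehaviour
{
    RenderTexture rt;

    void Start()
    {
        // Request R8G8B8A8_UNorm explicitly so the texture doesn't silently
        // fall back to the slow path.
        rt = new RenderTexture(1280, 720, 24, GraphicsFormat.R8G8B8A8_UNorm);
        rt.Create();
    }
}
```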
However, the fact that texture size doesn’t make a difference suggests a different problem. If it’s possible to submit a repro case in a bug report (and let me know the incident number, that is, IN-#####), that would help us debug the issue.
Thanks, I’ve tried this, but I get the same results unfortunately.
From profiling, I can see that the biggest bottleneck is actually coming from PolySpatialCore.UnitySimulationUpdate and PolySpatialUnityTracker.TrackObjectChanges. There appears to be an entire duplicate PolySpatial hierarchy of our scene running in the background, despite the fact that we only ever render a single quad. Is there a way to prevent PolySpatial from tracking specific objects? I’m already using a CullingMask on my VolumeCamera to restrict it to the quad.
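For context, here’s roughly how we’ve isolated the quad with the culling mask. The “StereoDisplay” layer name is our own, and we’re assuming the VolumeCamera’s CullingMask property behaves like a standard Unity layer mask:

```csharp
using UnityEngine;
using Unity.PolySpatial; // assuming this is where VolumeCamera lives

// Sketch of restricting the VolumeCamera to a single display quad by layer.
public class RestrictVolumeToQuad : MonoBehaviour
{
    public VolumeCamera volumeCamera;
    public GameObject displayQuad;

    void Awake()
    {
        // "StereoDisplay" is a layer we created ourselves in the Tags &
        // Layers settings; any dedicated layer would do.
        int layer = LayerMask.NameToLayer("StereoDisplay");
        displayQuad.layer = layer;
        volumeCamera.CullingMask = 1 << layer;
    }
}
```

Even with this in place, though, the trackers still appear to walk the whole scene.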
A quick update: we tried adding all trackers to “Disabled Trackers” in the Project Settings > PolySpatial menu, except for MeshRendererTracker and PolySpatialVolumeCameraTracker, and the scene runs much, much better.
Of course it’s still tracking a lot of assets that we don’t ever use, so a way to exclude objects in the hierarchy (or conversely, to only track objects we tag specifically) would be fantastic.
We are interested in this too. I’d assume this is a common use case, and it could well be the most popular way players play games on this platform, especially since there is a lot of focus on windows and shared spaces.
I believe this feature appears in the roadmap as “Stereo Render Targets” under the heading “Planned - 2024”. As “In Progress - Q1 2024” is the preceding heading, it suggests this won’t land for some time.
On a side note: I suppose the Vision Pro runs iOS & iPadOS apps natively, right? If so, it would be great to be able to deploy only to iOS (less hassle) and have some flag that enables stereo windowed functionality on visionOS.