Video Player - ImmersiveViewingMode

Is it possible to access ImmersiveViewingMode in Unity?

Currently, the VisionOSVideoComponent feels very limited. For example:

  • We cannot use alpha channels in immersive videos because the material gets overridden.
  • There is no way to access the current playback position or seek within the video, which is a fundamental feature when working with video.

Apple Vision Pro is marketed as a media and storytelling headset, yet video control in Unity is surprisingly underwhelming. I hope improvements are on the way to address these gaps.

Indeed, there is the native video player, but when working with high-quality video, the render texture method causes enormous lag…

Yeah, I would like to be able to play videos with depth from Unity on the AVP. From the tests I’ve done, there’s currently no way to do this, right?

There are currently two ways to play depth-enabled videos:

  • Using Unity’s native player, you can play a Side by Side or Top Bottom video, render it into a Render Texture, then use a custom shader with the “Eye Index” node in Shader Graph.
  • Using the VisionOSVideoComponent player, you can play a video encoded in MV-HEVC, which encodes stereo directly. Videos recorded in spatial mode with an iPhone 15 Pro or later are natively encoded in MV-HEVC (a quick way to verify a clip’s encoding is sketched below).
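
If you go the MV-HEVC route, it can help to confirm that a clip really is stereo-encoded before wiring it up. Below is a minimal sketch using AVFoundation’s stereo-multiview media characteristic; the function name and the URL are placeholders of mine, not something VisionOSVideoComponent exposes:

import AVFoundation

// Returns true if the first video track reports the stereo multiview (MV-HEVC) characteristic.
func isStereoMVHEVC(_ url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return false }
    let characteristics = try await track.load(.mediaCharacteristics)
    return characteristics.contains(.containsStereoMultiviewVideo)
}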

Hi,

Thanks for reaching out and sorry to hear you’re having trouble!

As you’ve noticed, VisionOSVideoComponent currently relies heavily on RealityKit’s native AVPlayer for video playback, and as far as I’m aware, AVPlayer does not support playing MV-HEVC with alpha channels. I know AVPlayer does support non-spatial (HEVC) videos with transparency, but I’m not sure when that feature will be brought over to MV-HEVC, and I would encourage filing feedback with Apple to push for it.

I can certainly take a look at what it would take to allow control over ImmersiveViewingMode through Unity - at a glance, I think we’d have to change some underlying architecture to support it. ImmersiveViewingMode is provided through RealityKit’s VideoPlayerComponent, and we’re currently using plain AVPlayer so that users can apply the VideoMaterial to arbitrary meshes.
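
For reference, here is roughly what that looks like on the RealityKit side - just a minimal sketch assuming visionOS 2.0+, where VideoPlayerComponent exposes desiredImmersiveViewingMode; the entity and clip URL are placeholders, and this is not something PolySpatial generates for you today:

import RealityKit
import AVFoundation

// Placeholder entity and clip URL - substitute your own.
let entity = Entity()
let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/clip.mov"))

var videoComponent = VideoPlayerComponent(avPlayer: player)
// Assumed visionOS 2.0+ API: requests how the video fills the immersive space (.full or .portal).
videoComponent.desiredImmersiveViewingMode = .full
entity.components.set(videoComponent)
player.play()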

For this feature, and for better support around video seeking, I would encourage you to submit both ideas to the roadmap to signal support for them.

Alternatively, if you’re interested in just displaying a video without applying it to a specific mesh, you may be able to write some Swift code to create a VideoPlayerComponent and attach it to a Unity-created entity. After an Xcode build, the video file should be available at ../Data/Raw/VisionOSVideoClips. You should be able to access the entity corresponding to a specific Unity instance id through PolySpatialWindowManagerAccess.entitiesForUnityInstanceId() - see the HelloWorldContentView.swift file in the SwiftUI sample scene for specific examples of how it works.

You will need to delete the ModelComponent on the entity corresponding to the Unity-side GameObject whose MeshRenderer the VisionOSVideoComponent is applied to, and then add a VideoPlayerComponent to it:

// Remove the mesh PolySpatial generated for the Unity-side MeshRenderer
entity.components.remove(ModelComponent.self)
// Wrap an existing AVPlayer in a RealityKit VideoPlayerComponent and attach it to the entity
let videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
entity.components.set(videoPlayerComponent)
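
Putting those pieces together, a rough end-to-end sketch might look like this. To be clear, the function name, the clip name, and the exact shape of entitiesForUnityInstanceId are assumptions on my part - check the SwiftUI sample for the real call signature:

import Foundation
import RealityKit
import AVFoundation

// Assumed helper: swaps the PolySpatial-generated mesh for a native video player.
func attachNativeVideoPlayer(toUnityInstanceId instanceId: Int32) {
    // Assumed location: the clip copied by the build into Data/Raw/VisionOSVideoClips.
    guard let clipURL = Bundle.main.url(forResource: "MyClip",
                                        withExtension: "mov",
                                        subdirectory: "Data/Raw/VisionOSVideoClips") else { return }
    let player = AVPlayer(url: clipURL)

    // Assumed call shape: returns the RealityKit entities backing that Unity instance id.
    for entity in PolySpatialWindowManagerAccess.entitiesForUnityInstanceId(instanceId) {
        entity.components.remove(ModelComponent.self)               // drop the Unity-generated mesh
        entity.components.set(VideoPlayerComponent(avPlayer: player))
    }

    player.play()
    // A nice side effect of holding the AVPlayer yourself: seeking works, e.g.
    // player.seek(to: CMTime(seconds: 30, preferredTimescale: 600))
}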

Alternatively, you can disable VisionOSVideoComponent entirely by using PolySpatialSettings - Disabled Features. You’ll need to manually import the video file into your Xcode project, but you won’t need to delete the ModelComponent.

I’m sorry this couldn’t be a more affirmative answer, but hopefully it helps bridge the feature gap somewhat!

Hello,

Thank you for your response.

I will try this, but I find it to be a very complex solution for a rather simple problem, which is just adding alpha to a spatial video.

As mentioned in a previous post, there is an application developed in Xcode that most likely uses the native video player and is able to play a video encoded in MV-HEVC with alpha on the sides.

Hi,

I’ve been taking a look on my own time at the app you linked, and based on what I think they’re doing, you may be able to get a similar effect without writing any native Swift code.

If you add an Unbounded volume camera to the scene and then set the Immersion Style to Progressive (the setting is located under Project Settings > Apple visionOS > Reality Kit Immersion Style), you should be able to rotate the Digital Crown to control the level of immersion and fade the video clip over passthrough. See the docs here for more information. With this approach, you should be able to use VisionOSVideoComponent as-is.

Hopefully that helps!