Help with getting pixels back from PolySpatialVideoComponent, with or without RenderTextures

We’re having trouble getting the PolySpatialVideoComponent to play nice with us. We use video as part of decoding volumetric video for Arcturus’ HoloSuite player. Unlocking this codec would significantly improve the live-action content possible on visionOS.

The PolySpatialVideoComponent doesn’t have a way of getting the current time of playback. Because of this, we need to encode the current frame number into the video itself as a byte code.
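
To make the workaround concrete, here’s a minimal sketch of the decoding side (the class name, block layout, and bit count are our own invention, and it assumes we can somehow obtain a CPU-readable Texture2D of the current frame, which is exactly the missing piece):

```csharp
using UnityEngine;

public static class FrameCodeDecoder
{
    // Hypothetical layout: the frame index is baked into the bottom-left of
    // each video frame as `bitCount` black/white blocks of `blockSize` pixels.
    public static int DecodeFrameIndex(Texture2D frameTexture, int bitCount = 16, int blockSize = 8)
    {
        int index = 0;
        for (int bit = 0; bit < bitCount; bit++)
        {
            // Sample the center of each block to be robust to compression noise.
            int x = bit * blockSize + blockSize / 2;
            Color c = frameTexture.GetPixel(x, blockSize / 2);
            if (c.grayscale > 0.5f)
                index |= 1 << bit;
        }
        return index;
    }
}
```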

However, PolySpatialVideoComponent also doesn’t seem to have a way to read pixels from the target renderer.

Additionally, if you point a camera with a render texture target at it and render, the quad shows up as white in the render texture.
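
For reference, the readback path we’re attempting is the standard Unity one, roughly this sketch (field names are ours); on visionOS the resulting pixels come back as the white placeholder:

```csharp
using UnityEngine;

public class QuadReadback : MonoBehaviour
{
    public Camera captureCamera;  // camera pointed at the video quad
    public RenderTexture target;  // assigned as captureCamera.targetTexture

    // Copies the camera's RenderTexture into a CPU-readable Texture2D.
    public Texture2D ReadBack()
    {
        var previous = RenderTexture.active;
        RenderTexture.active = target;
        var tex = new Texture2D(target.width, target.height, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, target.width, target.height), 0, 0);
        tex.Apply();
        RenderTexture.active = previous;
        return tex;  // on visionOS this currently contains only white
    }
}
```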

In this image, there are three planes with textures displayed on them. From left to right: the first is the quad with the PolySpatialVideoComponent targeting it; the second is a quad with a render texture from a camera pointed at the first quad; the third is a quad showing a render from a camera pointed at the first two quads and the otherwise empty scene.

How I’m interpreting this is that PolySpatialVideoComponent creates a texture only on the RealityKit side of things; the texture never makes its way back to the Unity runtime. I’m also interpreting that render textures are taken of the “ghost/mirror” scene that exists alongside the RealityKit scene… are these interpretations correct? Is there any way (by hook or by crook) to get the PolySpatialVideoComponent texture via script? Even if that’s via a native plugin? Please let me know 🙂

This is correct. The PolySpatial Video Component is a proxy for RealityKit’s VideoMaterial, and thus the video is only loaded on the Swift side. We are currently working on support for Unity’s VideoPlayer component, which will allow rendering video to a RenderTexture that will be available both in Unity and RealityKit.
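
Once that lands, the usual VideoPlayer-to-RenderTexture pattern should apply. As a sketch using today’s VideoPlayer API (nothing here is PolySpatial-specific):

```csharp
using UnityEngine;
using UnityEngine.Video;

public class VideoToRenderTexture : MonoBehaviour
{
    public VideoClip clip;
    public RenderTexture target;  // readable from Unity, and eventually mirrored to RealityKit

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = target;
        player.isLooping = true;
        player.Play();
    }
}
```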

Re the temporary component: what’s the likelihood that the next update could expose a reliable currentTime / currentFrame property from the AVPlayer?

In addition, do you have an ETA for when the updated VideoPlayer could be ready, or any way I could support moving that along?

@vcheung-unity may be able to answer this better, since she created the PolySpatial Video Component. I suspect, though, that getting this in a release short-term isn’t too likely; we have a lot on our plate at the moment, and we’re hoping to switch most use cases over to the VideoPlayer.

We don’t have a time estimate at the moment, sorry. However, it’s unlikely to be far out, as the feature is already close to working.

Thanks, this is good to hear. Key requirements on our side are:

  • correct frame number / time reporting, or
  • ability to read (a small number of) pixels back to the CPU (see the readback sketch below)
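
For the second requirement, this is roughly the kind of readback we mean, assuming the decoded video eventually lands in a RenderTexture we can sample (the 16x1 strip matches the hypothetical frame-code layout sketched earlier):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class PixelStripReadback : MonoBehaviour
{
    public RenderTexture videoTarget;  // wherever the decoded video ends up

    void Update()
    {
        // Request only the 16x1 pixel strip carrying the encoded frame number,
        // not the full frame. (A real implementation would throttle requests.)
        AsyncGPUReadback.Request(videoTarget, 0, 0, 16, 0, 1, 0, 1,
            TextureFormat.RGBA32, OnReadback);
    }

    void OnReadback(AsyncGPUReadbackRequest request)
    {
        if (request.hasError) return;
        var pixels = request.GetData<Color32>();
        // ... decode the frame index from `pixels` here ...
    }
}
```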

I’m not able to load video through the PolySpatial Video Component; the quad texture just turns into a black screen in the simulator. I have set the Video Clip and Target Material Renderer on the component. Does the video clip need to be in a specific path? I have placed the clip under Resources/StreamingAssets/PolySpatialVideoClips/.

It should be under <Project folder>/Assets/StreamingAssets/PolySpatialVideoClips.

If that still doesn’t work, please submit a bug report and let us know the incident number (IN-#####) so that we can investigate.

Hi!

As kapolka mentioned, I think the best and most reliable bet for your needs is going to be the normal Unity VideoPlayer once it’s been updated to work. Getting the normal Unity VideoPlayer component working will require a new Unity Editor version and a PolySpatial package release, which is part of the reason why we can’t give a very accurate time estimate quite yet. Once the work is merged in, I should be able to tell you at least which Editor version to keep an eye out for.

I’ve created a Jira ticket to track updating the PolySpatial Video Player component with currentTime, but it’s unlikely to be added in a release short-term. Additionally, I haven’t looked too deeply into the details, but I’m slightly worried about delays in getting the frame/time data: it takes a non-zero number of frames for commands to be relayed to and from the Swift side, so you run the risk of the frame/time data being outdated by the time the PolySpatial Video Player reports it. As I said, I haven’t tested this yet, so the delay may ultimately be insignificant, but there will be a delay, unlike with the Unity VideoPlayer component, which has full control of the video.
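
To illustrate the concern: even if a relayed currentTime ships, the consumer would likely need to extrapolate around the staleness themselves, along these lines (the `reportedTime` source stands in for the hypothetical property discussed above):

```csharp
using UnityEngine;

public class RelayedTimeEstimator : MonoBehaviour
{
    double lastReportedTime = -1.0;
    float lastChangeRealtime;

    // `reportedTime` arrives a few frames late from the Swift side; estimate
    // the true playback time by adding however long the value has been stale.
    public double Estimate(double reportedTime)
    {
        if (reportedTime != lastReportedTime)
        {
            lastReportedTime = reportedTime;
            lastChangeRealtime = Time.realtimeSinceStartup;
        }
        return lastReportedTime + (Time.realtimeSinceStartup - lastChangeRealtime);
    }
}
```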

Hi there:

Is there any update on this? We’re still blocked in development by this missing feature.