Render a second camera to a Render Texture in Unbounded Volume

Hi. I’m trying to build an immersive XR experience on Vision Pro where a player decorates a garden with furniture and miscellaneous items using the XR Interaction Toolkit.

The second dimension of this experience is the onlookers: they should be able to see the experience from outside the Vision Pro on a projected screen. Instead of showing the experience through the player’s eyes via AirPlay streaming, we’re trying to show it from a different perspective within the scene.

To accomplish this, I’m trying to use a secondary camera in the scene, render its view to a Render Texture, encode the texture images, and send them to a Mac over UDP on the local Wi-Fi network. The Mac will be connected to a projector, which will project the Mac’s screen onto a mesh screen that the onlookers can watch.
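Roughly, the capture-and-send loop I have in mind looks like this (just a sketch, not my final implementation; the host address, port, and JPEG encoding are placeholders):

```csharp
using System.Net.Sockets;
using UnityEngine;

// Sketch only: capture the secondary camera's RenderTexture each frame,
// JPEG-encode it, and push it to the Mac over UDP. Host/port are placeholders.
public class RenderTextureStreamer : MonoBehaviour
{
    public RenderTexture source;            // the secondary camera's target texture
    public string macHost = "192.168.0.10"; // placeholder: the Mac's LAN address
    public int port = 9000;                 // placeholder port

    UdpClient _client;
    Texture2D _readback;

    void Start()
    {
        _client = new UdpClient();
        _readback = new Texture2D(source.width, source.height, TextureFormat.RGB24, false);
    }

    void LateUpdate()
    {
        // Copy the GPU texture into a CPU-readable Texture2D.
        var previous = RenderTexture.active;
        RenderTexture.active = source;
        _readback.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
        _readback.Apply();
        RenderTexture.active = previous;

        // Encode and send. A real implementation would split frames into
        // chunks below the ~64 KB UDP datagram limit, or use a video codec.
        byte[] jpg = _readback.EncodeToJPG(60);
        if (jpg.Length < 60000)
            _client.Send(jpg, jpg.Length, macHost, port);
    }

    void OnDestroy() => _client?.Close();
}
```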

Here’s my problem: I can’t get a secondary camera to render to a Render Texture. I set up a test in the scene: I assigned the Render Texture to a Raw Image on a canvas to verify that the rendering works without the networking part. The Render Texture shows up just fine in the Editor, both in and out of Play Mode. However, as soon as I enter either the simulator or a build on the Vision Pro device, nothing shows up in the Raw Image that’s meant to be displaying the Render Texture.
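The test wiring is essentially this (a sketch; in my scene the references are assigned in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the test: display the Render Texture on a UI Raw Image.
public class RenderTextureDebugView : MonoBehaviour
{
    public RenderTexture renderTexture; // same texture the secondary camera targets
    public RawImage rawImage;           // Raw Image on the canvas

    void Start()
    {
        rawImage.texture = renderTexture;
    }
}
```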

Here’s my question: in an Unbounded Volume scene using PolySpatial, is it possible to render a second camera other than the XR Origin’s camera? If so, how would I go about achieving this?

PolySpatial version: 2.0.4
Unity version: 6000.0.26f1

Secondary camera setup: [screenshot]

I can’t post other Editor images since I’m a new user, which is a bummer.

The main thing to be aware of here is that RealityKit visionOS builds run in batch mode, which means that Cameras aren’t rendered every frame by default. There’s a simple script to render on update here.
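For reference, the linked script is along these lines (a minimal sketch; see the linked page for the actual version):

```csharp
using UnityEngine;

// Minimal sketch: force the attached camera to render every frame,
// since RealityKit visionOS builds run in batch mode and cameras
// aren't rendered automatically.
[RequireComponent(typeof(Camera))]
public class RenderCameraOnUpdate : MonoBehaviour
{
    Camera _camera;

    void Start()
    {
        _camera = GetComponent<Camera>();
    }

    void Update()
    {
        _camera.Render(); // renders into the camera's target RenderTexture
    }
}
```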

Also, if you’re not using a Camera to render, then you need to manually dirty the RenderTexture (also described in the linked page).
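That looks roughly like this (a sketch assuming PolySpatial 2.x’s PolySpatialObjectUtils.MarkDirty; the linked page has the exact usage):

```csharp
using Unity.PolySpatial;
using UnityEngine;

// Sketch: mark the RenderTexture dirty each frame so PolySpatial
// transfers its updated contents to the RealityKit side.
public class ManualRenderTextureDirty : MonoBehaviour
{
    public RenderTexture target;

    void LateUpdate()
    {
        PolySpatialObjectUtils.MarkDirty(target);
    }
}
```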

Hi. Thank you for the reply.

I tried attaching the two scripts to the camera and assigned the Render Texture to the ManualDirty script. However, when I run the scene on Vision Pro using the Run On Device method, I don’t see the Render Texture being rendered in the scene.


[Editor screenshot]

This is the Editor scene of my setup, where you can see the Render Texture test setup on the right. It does show in the Editor, but not on Vision Pro. The manual camera render script does seem to run every update: I tested it with a debug message, and there’s a meaningful frame drop when I run the scene.

If you mean Play to Device, the frame drop isn’t surprising: the contents of the RenderTexture are sent (uncompressed) over the network every frame. It will be faster in actual device/simulator builds, because in that case it can use a GPU blit to transfer the texture.

I’m not sure what issue you’re running into without seeing your actual project. If you like, you can submit a bug report with a repro case (and let me know the incident number: IN-#####), and I can look into what’s going wrong.