We are developing a mixed reality (MR) application for Apple Vision Pro.
We want to output the environment the user sees as a render texture by assigning a RenderTexture to the main camera's output.
However, when I assigned the RenderTexture to the main camera's output and displayed it on a canvas, nothing was rendered.
Adding a script that forces the camera to re-render every frame didn't help either.
Is it possible that cameras in VR and AR environments cannot handle more than one rendering task at a time?
If you're using the RealityKit backend, then the Unity Camera objects aren't used for rendering by default; instead, the Volume Camera determines which part of the scene is visible (and at what transform). So, yes, you should be able to configure the Camera to render to a RenderTexture and display it in, for instance, a Material used by a MeshRenderer, or the texture of a UGUI RawImage.

Because PolySpatial apps run in batch mode, you do have to call Render on every frame to update the texture, and there's example code to do this in our documentation on RenderTextures.

If you can't get this to work, feel free to submit a bug report with a repro case (and let us know the incident number: IN-#####) so that we can see what might be going wrong.
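For reference, a script along these lines would drive the per-frame update. This is a minimal sketch, not the sample from the documentation; the class name is illustrative, and it assumes the RenderTexture is already assigned as the camera's Output Texture:

```csharp
using UnityEngine;

// Minimal sketch: forces a camera to render into its assigned
// RenderTexture every frame, since PolySpatial apps run in batch
// mode and cameras aren't rendered automatically.
[RequireComponent(typeof(Camera))]
public class RenderTextureUpdater : MonoBehaviour
{
    Camera captureCamera;

    void Start()
    {
        captureCamera = GetComponent<Camera>();

        // Disable automatic rendering; we drive the camera manually.
        captureCamera.enabled = false;
    }

    void Update()
    {
        // Renders into whatever RenderTexture is assigned as the
        // camera's Output Texture (Camera.targetTexture).
        captureCamera.Render();
    }
}
```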
If you’re using the Metal backend, then yes, you should probably use a separate Camera to render to the RenderTexture.
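As a sketch of that setup (component and field names are illustrative, and the texture resolution is arbitrary), the separate camera renders into a RenderTexture that a RawImage on your canvas then displays:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative setup for the Metal backend: a dedicated capture camera
// (separate from the main/XR camera) renders into a RenderTexture,
// which a UGUI RawImage on the canvas then displays.
public class MetalCaptureSetup : MonoBehaviour
{
    [SerializeField] Camera captureCamera;   // NOT the main camera
    [SerializeField] RawImage output;        // RawImage on the canvas

    void Start()
    {
        // Width, height, and depth-buffer bits chosen for illustration.
        var texture = new RenderTexture(1024, 1024, 24);

        captureCamera.targetTexture = texture;  // camera renders off-screen
        output.texture = texture;               // canvas shows the capture
    }
}
```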