How are Render Textures rendered?

Hi,
I have a comprehension question. Custom shaders cannot be transferred with PolySpatial, but I can use render textures with custom shaders. How (or where) are these shaders or textures rendered, then? Is the rendering carried out in advance in the Unity engine, with the result sent to RealityKit in the form of the render texture?

My current understanding is that the Unity Engine and the “RealityKit Engine” run in parallel, and PolySpatial translates the scene graph from Unity to RealityKit, which displays everything on the Vision Pro. Input is then sent back to Unity, where it is processed. In this loop, Unity would also “pre-render” RenderTextures, the output of which can be translated into RealityKit. Please correct me if I am wrong about this.

Thanks in advance.

Not a Unity person, but from my limited experience, it appears render textures are rendered on the Unity scene graph side, so they will have various artifacts (world-space offset, background skybox, etc.), and any rendering quirks of RealityKit won’t be present in the render texture.


Not ShaderLab shaders, but we do support converting shader graphs (with some limitations).

This is correct. When you render to a RenderTexture, we use the standard (Metal-based) Unity renderer. VisionOS builds in MR mode run in Unity batch mode, so the only rendering that happens must be manually invoked (using Camera.Render, for example). Then, at the end of the frame, we notify the RealityKit side that the texture has changed, and we use the TextureResource.DrawableQueue API to blit the contents of the RenderTexture into a texture that we can use in RealityKit.
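For a concrete picture of the Unity side of that flow, here is a minimal sketch, assuming the built-in render pipeline (where `Camera.Render` can be called directly; with a scriptable render pipeline the exact invocation differs). The component and field names are hypothetical; `Camera.Render`, `Camera.targetTexture`, and `TextureResource.DrawableQueue` are the APIs mentioned above.

```csharp
using UnityEngine;

// Minimal sketch: in MR mode the player runs in batch mode, so cameras do not
// render automatically -- rendering into the RenderTexture has to be invoked
// manually. PolySpatial then picks up the texture change at the end of the frame
// and blits it to RealityKit via TextureResource.DrawableQueue.
public class ManualRenderTextureDriver : MonoBehaviour
{
    // Hypothetical fields -- assign these in the Inspector.
    public Camera sourceCamera;          // disabled camera used only for manual renders
    public RenderTexture targetTexture;  // RenderTexture referenced by a shader graph material

    void Start()
    {
        // Keep the camera from trying to render on its own; we drive it explicitly.
        sourceCamera.enabled = false;
        sourceCamera.targetTexture = targetTexture;
    }

    void LateUpdate()
    {
        // Render with the standard (Metal-based) Unity renderer into the RenderTexture,
        // ahead of PolySpatial's end-of-frame notification to the RealityKit side.
        sourceCamera.Render();
    }
}
```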

Thank you for the clarification!