Frame drops in render-to-texture sync between Unity and RealityKit

We have an issue syncing our render-to-texture output with RealityKit’s rendering.
For our game, we have a RealityKit plane (through PolySpatial) displaying a stereo image of our game, using a shader that renders a different perspective for each eye to add depth.
We render the game at 90 fps at 3960x1800 to a single RenderTexture that contains both the left- and right-eye images.
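For reference, here’s a simplified sketch of how we drive the two eye renders into the shared texture; the component and field names are just illustrative, and the real setup is more involved:

```csharp
using UnityEngine;

// Simplified sketch: two (disabled) eye cameras rendered manually into the
// left and right halves of one shared side-by-side RenderTexture.
public class StereoRenderToTexture : MonoBehaviour
{
    public Camera leftEyeCamera;   // disabled in the scene; we call Render() manually
    public Camera rightEyeCamera;
    public RenderTexture stereoTexture; // 3960x1800, left half + right half

    void LateUpdate()
    {
        // Left eye into the left half of the shared texture...
        leftEyeCamera.targetTexture = stereoTexture;
        leftEyeCamera.rect = new Rect(0f, 0f, 0.5f, 1f);
        leftEyeCamera.Render();

        // ...and the right eye into the right half, so both halves always
        // come from the same frame.
        rightEyeCamera.targetTexture = stereoTexture;
        rightEyeCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f);
        rightEyeCamera.Render();
    }
}
```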
We think that because the texture has to be streamed over to RealityKit, some frames are being dropped, so the effective frame rate displayed in RealityKit is lower than the 90 frames per second we submit from the Unity render loop. At lower resolutions it’s much smoother, which makes us think it takes longer than 1/90th of a second to stream the render texture over for display in RealityKit. But lowering the resolution any further is not an option.
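As a rough back-of-the-envelope check (assuming 4 bytes per pixel): 3960 x 1800 is about 7.1 million pixels, so roughly 28.5 MB per frame, and streaming 90 of those per second would be on the order of 2.5 GB/s of texture traffic.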
Is there a way to synchronize the texture faster? Or are there other techniques to improve the effective frame rate in Unity?

Are you saying that the display frame rate is dropping below 90fps, while the update rate is maintaining 90? That would surprise me; I would think that the update thread would be the bottleneck, since that’s where we do all the processing for PolySpatial and copy the RenderTexture. We do have some ideas about optimizing that path (notably, we currently do a GPU blit from a texture created by Unity to a texture supplied by RealityKit, and would like to have Unity render directly into the texture from RealityKit), but I don’t have any particular advice in the meantime. It’s something we would like to optimize in the future.
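For illustration, the blit itself is just the standard Unity pattern of a GPU-side copy between textures; this isn’t our actual PolySpatial code, just the general shape of it:

```csharp
using UnityEngine;

// Generic GPU blit: copies the source texture into the destination on the GPU
// (no pixels go through the CPU). PolySpatial's real path is more involved;
// this only illustrates the extra copy we'd like to eliminate.
public static class BlitExample
{
    public static void CopyToRealityKitTexture(RenderTexture unityTexture, RenderTexture realityKitTexture)
    {
        Graphics.Blit(unityTexture, realityKitTexture);
    }
}
```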

We assume it is, but it’s hard for us to test, so we’re not sure. We have enough horsepower to reach 90 fps when profiling.

Previously we had one RenderTexture per eye, i.e. two separate RenderTextures, and on some frames the left eye showed a different frame than the right eye. To keep them in sync, we merged both eyes into one texture. Our thinking is that if the two textures could diverge within a frame, frames could also be dropped for the same reason.
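In case it’s useful, the merge itself was just a GPU-side copy of each eye into one half of the combined texture, roughly like this (simplified, names illustrative):

```csharp
using UnityEngine;

// Simplified version of the per-frame merge: copy each eye texture into its
// half of the combined side-by-side texture. Graphics.CopyTexture requires
// the source and destination formats to match.
public class EyeTextureMerger : MonoBehaviour
{
    public RenderTexture leftEye;   // 1980x1800
    public RenderTexture rightEye;  // 1980x1800
    public RenderTexture combined;  // 3960x1800

    void LateUpdate()
    {
        Graphics.CopyTexture(leftEye, 0, 0, 0, 0, leftEye.width, leftEye.height,
                             combined, 0, 0, 0, 0);
        Graphics.CopyTexture(rightEye, 0, 0, 0, 0, rightEye.width, rightEye.height,
                             combined, 0, 0, leftEye.width, 0);
    }
}
```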

We also assumed that RealityKit’s render loop is not in sync with Unity’s render loop, and that the texture therefore needs to be streamed across.

Your solution sounds like exactly what we need, so that would be great!

Yes, that’s correct. There’s no render loop per se on the Unity side for MR visionOS builds; apps run in batch mode and target 90fps for updates, but any rendering (e.g., to RenderTextures) has to be done manually (as I’m sure you’re aware). The rendering runs in an entirely separate thread (or set of threads) that RealityKit manages and that we have no real control over, and assets like textures and meshes have to be streamed to those threads.

It’s pretty easy to see the update rate by inserting an ad-hoc FPS counter that measures how often Unity calls the Update() method on MonoBehaviours, but measuring the actual rendering rate is trickier; the advice we’ve been given from Apple is to use the RealityKit Trace instruments in Xcode and look at the frame render times. At any rate, we’ve mostly been focused on improving the update rate, since that’s what we have the most direct responsibility for.
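For what it’s worth, the ad-hoc counter can be as simple as this (just an illustration of the measurement, not a shipping tool):

```csharp
using UnityEngine;

// Ad-hoc update-rate counter: logs how many times Update() ran over the last
// second. This measures the Unity/PolySpatial update rate, not the rate at
// which RealityKit actually renders frames.
public class UpdateRateCounter : MonoBehaviour
{
    int updateCount;
    float elapsed;

    void Update()
    {
        updateCount++;
        elapsed += Time.unscaledDeltaTime;
        if (elapsed >= 1f)
        {
            Debug.Log($"Update rate: {updateCount / elapsed:F1} updates/sec");
            updateCount = 0;
            elapsed = 0f;
        }
    }
}
```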

The one thought I did have is that mipmap generation might be slowing things down, if you have that enabled (though I believe it’s disabled by default).
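If you create the RenderTexture from script, it’s worth double-checking the relevant flags explicitly; something like this (a sketch using your resolution, and assuming a standard color format):

```csharp
using UnityEngine;

public static class RenderTextureSetup
{
    // Create the stereo RenderTexture with mipmaps explicitly disabled.
    // These are the defaults, but making them explicit rules out per-frame
    // mip generation as a hidden cost.
    public static RenderTexture CreateStereoTexture()
    {
        var descriptor = new RenderTextureDescriptor(3960, 1800, RenderTextureFormat.ARGB32)
        {
            useMipMap = false,        // no mip chain allocated
            autoGenerateMips = false  // no mip regeneration after each render
        };
        return new RenderTexture(descriptor);
    }
}
```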

This is something we’ve run into as well, and our upcoming release should make it so that multiple RenderTextures are synchronized, so you might try that method again to see how it affects performance.

Thanks for the rundown on how it works!

Our issue seems to be fixed!! It renders really smoothly now! We think it was fixed by our update to 2022.3.15. Was the RenderTexture sync already part of that release?

No, but it’s great to hear that it’s fixed! The only change I’m aware of going into 2022.3.15f1 was a fix that we made for a memory leak, but that may have affected RenderTexture performance as well.