Hello, I’d like to ask about a UI issue. I’ve noticed in a demo that each UGUI element generates a separate Mesh at runtime. If I have a large number of UI elements, will UGUI batching still be used at runtime on Vision Pro? If there’s no batching and there are, for instance, 500 elements, won’t the rendering overhead be quite significant? Is there a more optimized solution for building complex UIs with UGUI?
We have no insight into how the Vision Pro actually does its rendering at the level you are asking about. I would assume that Apple is doing batching or some other set of optimizations, but that is not within the area we can work with. We just tell RealityKit what is there and assume that Apple is doing its best to optimize the rendering of its scene graph.
Thank you for your response. I have a few more questions to ask:
Has PolySpatial tried creating a single Mesh for the entire UI? Would this approach optimize rendering efficiency?
I attempted to render the complete UI to a RenderTexture using the Camera’s Output Texture. It works in the Unity Editor, but not in the simulator. I’m not sure why.
After switching to PolySpatial, replacing the UI’s Material seems to have no effect. How should this be handled?
Has PolySpatial tried creating a single Mesh for the entire UI? Would this approach optimize rendering efficiency?
No, but it is something we have thought about, and we do plan to look into it at some point.
I attempted to render the complete UI to a RenderTexture using the Camera’s Output Texture. It works in the Unity Editor, but not in the simulator. I’m not sure why.
Can you clarify what you mean by “not working”?
After switching to PolySpatial, replacing the UI’s Material seems to have no effect. How should this be handled?
Not sure what you are trying to do here. Can you clarify what you mean by “replacing the UI’s material”?
When I use a RenderTexture generated from a Camera’s Output Texture and assign it to a Mesh’s Material, the mesh is not visible in the simulator (a sketch of this setup follows below).
Replacing the UI’s material means: a UI contains an Image component, and when I need to apply special effects to that Image, I replace its Material with a Material that I created.
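For clarity, here is a minimal sketch of the setup being described, with hypothetical names (UiToMeshSketch, m_UiCamera, m_TargetMesh): a Camera renders the UI into a RenderTexture, which is then assigned to a mesh’s material.
using UnityEngine;
// Hypothetical sketch: route a camera's output into a RenderTexture
// and display that texture on a mesh.
public class UiToMeshSketch : MonoBehaviour
{
    [SerializeField] Camera m_UiCamera;         // camera that renders the UI canvas
    [SerializeField] MeshRenderer m_TargetMesh; // mesh that should display the UI
    void Start()
    {
        // Route the camera's output into a RenderTexture.
        var rt = new RenderTexture(1024, 1024, 24);
        m_UiCamera.targetTexture = rt;
        // Display that texture on the mesh.
        m_TargetMesh.material.mainTexture = rt;
    }
}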
This is likely because visionOS apps run in batch mode, and thus their cameras don’t render automatically. We use a simple script to do this on each update in our internal testing:
using UnityEngine;
using Unity.PolySpatial;
// Attach this to the GameObject holding the Camera that renders to the
// output texture. Since visionOS apps run in batch mode, cameras don't
// render automatically; this triggers a render manually each frame.
public class BatchModeUpdateRenderer : MonoBehaviour
{
    Camera m_Camera;
    void Start()
    {
        m_Camera = GetComponent<Camera>();
    }
    void Update()
    {
        // Only render manually in batch mode (e.g., on device or in the
        // simulator); in the editor, cameras render normally.
        if (Application.isBatchMode && m_Camera)
            m_Camera.Render();
    }
}
(we’ll include this in the docs in a future version)
Only Shader Graph shaders are supported on visionOS, and Unity UI does not currently support Shader Graph shaders (2023.2 introduces this), so you won’t be able to use custom shaders on Image components rendered on visionOS.
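For illustration, a minimal sketch of the material swap being described (the ImageMaterialSwap name is hypothetical); on visionOS the replacement material would need to use a Shader Graph shader, which Unity UI only supports starting with 2023.2.
using UnityEngine;
using UnityEngine.UI;
// Hypothetical sketch: swap an Image's default material for a custom one.
public class ImageMaterialSwap : MonoBehaviour
{
    [SerializeField] Material m_CustomMaterial; // on visionOS, this must use a Shader Graph shader
    void Start()
    {
        // Replace the Image's default material with the custom one.
        var image = GetComponent<Image>();
        if (image != null)
            image.material = m_CustomMaterial;
    }
}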