Hey everyone,
I’m currently working on an app for the Apple Vision Pro, running in PolySpatial / Shared Space, which displays a sequence of meshes and textures that are streamed from disk in real time. This means the displayed mesh and textures are updated up to 30 times per second.
I already know that PolySpatial is particularly inefficient in these cases, since all meshes and textures need to be mirrored to the RealityKit renderer, but even loading a 256x256 texture and a mesh of around 1000 polygons 30 times per second drops the framerate to around 30 FPS. On my M2 Mac, the same setup runs at around 350 FPS.
Are there any best practices or workarounds that could help speed this up, such as low-level APIs or special pathways? A fast texture upload pathway seems to exist for render textures, but it doesn’t appear to apply to regular textures?
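For context, the render-texture pathway I mean is (if I’ve understood the docs correctly) the one where you mark a RenderTexture dirty so PolySpatial transfers it to RealityKit. Roughly (the component and its fields here are just my sketch):

```csharp
using Unity.PolySpatial;
using UnityEngine;

// Sketch of the render-texture fast path as I understand it:
// render into the RenderTexture, then mark it dirty so PolySpatial
// mirrors the updated contents to the RealityKit side.
public class RenderTextureUpdater : MonoBehaviour
{
    public RenderTexture target; // assigned in the inspector

    void Update()
    {
        // ... draw/blit the new frame into `target` here ...

        // Signals PolySpatial that the texture contents changed this frame.
        PolySpatialObjectUtils.MarkDirty(target);
    }
}
```

As far as I can tell this only works for RenderTextures, which is exactly why I’m asking whether something comparable exists for Texture2D.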
I suspect the exact way the meshes and textures are uploaded doesn’t really matter, but just in case:
For the textures, I read an .astc file from disk, load the raw texture data into an already allocated texture, and then apply it:
frame.texture.LoadRawTextureData<byte>(frame.textureBufferRaw);
frame.texture.Apply(false);
frame.frameMeshRenderer.sharedMaterial.SetTexture("_MainTex", frame.texture);
For the meshes, I read the vertices and indices from disk into a NativeArray and then set them with:
meshFilter.sharedMesh.SetVertexBufferData<byte>(frame.vertexBufferRaw, 0, 0, frame.vertexBufferRaw.Length);
meshFilter.sharedMesh.SetIndexBufferData<byte>(frame.indiceBufferRaw, 0, 0, frame.indiceBufferRaw.Length);
meshFilter.sharedMesh.SetSubMesh(0, new SubMeshDescriptor(0, indiceCounts), MeshUpdateFlags.DontRecalculateBounds);
meshFilter.sharedMesh.RecalculateNormals();
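In case it’s relevant: would passing MeshUpdateFlags to skip Unity’s validation on these uploads make any measurable difference here? A sketch of what I mean (the helper is hypothetical, and the flag choice is my assumption that the buffers are known-good):

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical helper wrapping the calls above, with MeshUpdateFlags set to
// skip index validation, bounds recalculation, and mesh-user notification.
static class MeshUpload
{
    const MeshUpdateFlags kFlags =
        MeshUpdateFlags.DontValidateIndices |
        MeshUpdateFlags.DontRecalculateBounds |
        MeshUpdateFlags.DontNotifyMeshUsers;

    public static void Upload(Mesh mesh, NativeArray<byte> vertices,
                              NativeArray<byte> indices, int indexCount)
    {
        mesh.SetVertexBufferData(vertices, 0, 0, vertices.Length, 0, kFlags);
        mesh.SetIndexBufferData(indices, 0, 0, indices.Length, kFlags);
        mesh.SetSubMesh(0, new SubMeshDescriptor(0, indexCount), kFlags);
    }
}
```

I’d mainly like to know whether any of this main-thread cost even matters, or whether the bottleneck is entirely on the PolySpatial mirroring side.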
I’d be very grateful for any feedback!
