Sorry, this is a bit subjective, but I’m looking for advice on options… I need to render about 8-10k individually moving instances of very simple low-poly meshes (cubes, tetrahedrons, etc.) in an unbounded MR space with hand tracking and environment meshing. There will also be a few GameObject→RealityKit entities in the scene.
Is it correct that there is no instancing support in PolySpatial/RealityKit? (no bridge to LowLevelMesh/MeshInstanceCollection)
Assuming that’s correct, are these my current options:
Particle System in Bake to Mesh mode: use Mesh render mode, emit once, and update each particle’s position/rotation in Burst-compiled C#
Can use a shader that receives environment lighting/reflections
Will probably be slow; meshes will not interpenetrate and might have depth layering/popping issues
ECS/DOTS/Entities Graphics
Was excited for Entities Graphics support but haven’t tested it yet… ~10k entities might be a good case for it…
This reply is concerning: material overrides are a big part of Entities Graphics, and “rough/untested” scares me
Stereo Render Targets
Use Metal-friendly compute shaders and Graphics.RenderMeshIndirect (see the sketch after this list)
Not sure if instances can be occluded by the environment, or how they will render alongside GameObject/RealityKit entities
Can’t use PolySpatial environment lighting/reflections in shaders
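For option 3, roughly what I have in mind is the standard indirect-draw pattern below (a minimal sketch only: the CSMain kernel, the _InstanceMatrices buffer name, and the bounds are placeholders I made up, the material would need a custom shader that reads the matrices by instance ID, and I have no idea yet how this composites with PolySpatial content):

```csharp
// Sketch: a compute shader writes one float4x4 per instance into a GraphicsBuffer,
// then Graphics.RenderMeshIndirect issues a single instanced draw.
using UnityEngine;

public class IndirectCubes : MonoBehaviour
{
    public Mesh mesh;                  // e.g. a cube
    public Material material;          // shader must read _InstanceMatrices by instance ID
    public ComputeShader updateShader; // hypothetical kernel that animates the transforms
    public int instanceCount = 10000;

    GraphicsBuffer matrixBuffer;
    GraphicsBuffer argsBuffer;
    MaterialPropertyBlock props;

    void OnEnable()
    {
        matrixBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured,
                                          instanceCount, sizeof(float) * 16);

        argsBuffer = new GraphicsBuffer(GraphicsBuffer.Target.IndirectArguments,
                                        1, GraphicsBuffer.IndirectDrawIndexedArgs.size);
        var args = new GraphicsBuffer.IndirectDrawIndexedArgs[1];
        args[0].indexCountPerInstance = mesh.GetIndexCount(0);
        args[0].instanceCount = (uint)instanceCount;
        argsBuffer.SetData(args);

        props = new MaterialPropertyBlock();
    }

    void Update()
    {
        // Animate all transforms on the GPU.
        int kernel = updateShader.FindKernel("CSMain");
        updateShader.SetBuffer(kernel, "_InstanceMatrices", matrixBuffer);
        updateShader.SetFloat("_ElapsedTime", Time.time);
        updateShader.Dispatch(kernel, Mathf.CeilToInt(instanceCount / 64f), 1, 1);

        // One indirect draw for all instances.
        props.SetBuffer("_InstanceMatrices", matrixBuffer);
        var rp = new RenderParams(material)
        {
            worldBounds = new Bounds(Vector3.zero, Vector3.one * 100f),
            matProps = props
        };
        Graphics.RenderMeshIndirect(rp, mesh, argsBuffer);
    }

    void OnDisable()
    {
        matrixBuffer?.Release();
        argsBuffer?.Release();
    }
}
```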
Curious if I’m missing anything, and I’d welcome any opinions on the pros/cons… thanks!
That’s correct. We have plans to try using RealityKit’s one apparent instancing option, MeshResource.Instance, but that only allows transforms to change per-instance (no other properties, like color), and it’s not entirely clear to me that it uses hardware instancing anyway. For example, the LowLevelMesh API doesn’t support instancing, and we are tending to use LowLevelMesh more and more because it allows us to use Unity’s mesh data directly (reducing the mesh processing time).
This won’t help performance in RealityKit on visionOS. Although we have basic support for DOTS entities, we end up converting each entity to a RealityKit entity, so it’s functionally the same as using GameObjects. Again, this is basically because of how RealityKit works.
This is definitely an option worth pursuing. I haven’t looked at the stereo render target support recently, but I believe it supports a mode where depth is essentially turned into a displacement map, meaning that occlusion should work roughly as expected. It might even be possible to use a custom shader graph on the stereo render target mesh that incorporates the visionOS image based lighting.
Of course, another option is just using Metal mode, if you don’t need the RealityKit features (like gaze-based interaction).
This is closer to the best approach I can think of, though I actually think Bake to Texture would work better than Bake to Mesh. Whether or not you’re using Unity particle systems, the idea would be to encode the transforms and other properties (like color) into a floating point RenderTexture, then use that as an input to a shader graph that uses the vertex stage to create instances of the mesh (with the actual mesh being placeholder data). That’s exactly how the Bake to Texture particles work, but there’s nothing stopping you from using the same technique to render your own content.
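In case it helps, here’s a rough sketch of the CPU side of that idea. Everything in it is illustrative: the _InstanceData property name, the row layout, and the naive per-frame loop (which you’d want to replace with jobs or a compute/fragment pass writing to a RenderTexture). The real work is in the shader graph that samples the texture in its vertex stage to position each instance of the placeholder mesh.

```csharp
// Sketch: pack per-instance data into a float texture, one column per instance.
// Row 0 = position.xyz + uniform scale, row 1 = rotation quaternion, row 2 = color.
using UnityEngine;

public class InstanceDataTexture : MonoBehaviour
{
    public Material instancedMaterial; // shader graph that displaces the placeholder mesh
    public int instanceCount = 10000;

    Texture2D dataTexture;
    Color[] pixels;

    void Start()
    {
        dataTexture = new Texture2D(instanceCount, 3, TextureFormat.RGBAFloat, false, true)
        {
            filterMode = FilterMode.Point,
            wrapMode = TextureWrapMode.Clamp
        };
        pixels = new Color[instanceCount * 3];
        instancedMaterial.SetTexture("_InstanceData", dataTexture); // property name is a placeholder
        instancedMaterial.SetFloat("_InstanceCount", instanceCount);
    }

    void Update()
    {
        for (int i = 0; i < instanceCount; i++)
        {
            // Whatever simulation drives the instances; jobs/Burst would go here.
            Vector3 pos = new Vector3(Mathf.Sin(Time.time + i), 0f, Mathf.Cos(Time.time + i));
            Quaternion rot = Quaternion.Euler(0f, Time.time * 90f + i, 0f);

            pixels[i] = new Color(pos.x, pos.y, pos.z, 0.05f);                      // row 0: position + scale
            pixels[instanceCount + i] = new Color(rot.x, rot.y, rot.z, rot.w);      // row 1: rotation
            pixels[2 * instanceCount + i] = Color.HSVToRGB((i % 256) / 256f, 1, 1); // row 2: color
        }
        dataTexture.SetPixels(pixels);
        dataTexture.Apply(false);
    }
}
```

The placeholder mesh would just contain N copies of the base shape with an instance index baked into a UV channel (or derived from the vertex ID), so the vertex stage knows which column of the texture to sample.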
It happens that we’ve recently been looking at internal content that benefits from instancing, so it’s at least something we’re aware of. It might also be worth letting Apple know via their Feedback Assistant that you’re interested in seeing support for generalized instancing in RealityKit.
I was actually just looking into MeshResource.Instance and MeshInstanceCollection, and they aren’t what I expected; it doesn’t look like RealityKit has any proper instancing support at the moment…
Good to know Entities Graphics isn’t batching entities; I won’t pursue this route.
I do need some gaze-based interactions, so I may try Stereo Render Targets as a backup.
I hadn’t actually thought of your suggestion, but back in the day (before compute shaders) I did something similar in WebGL and Unity: encoding transform/color data in textures, updating them in a fragment shader, then passing the textures to the vertex shader of a placeholder mesh to render all of the instances… I’ll give it a try.
I’d love to hear if you implement anything on your side, and I’ll go request instancing support in RealityKit with Apple now… thanks again!