Why two separate components for SpatialMapping?

Can someone explain to me why there are two separate components for Spatial Mapping? Aren't the Spatial Mapping Renderer and the Spatial Mapping Collider doing the same thing, except that one script adds Mesh Renderers and the other adds Mesh Colliders to the surfaces?

Using both at the same time would produce twice as many meshes as necessary, wouldn’t it? So why is this a good idea or am I missing something?

Hi derdemi, it is a little confusing, but hopefully this helps.

The Collider enables a mesh to be interacted with: it essentially gives the mesh a collider that fits the shape of the mesh itself, rather than just a bounding-box collider or one of the other collider shapes you would find in the physics component menu. However, this mesh cannot be seen without its counterpart, the Renderer.

The Renderer enables a mesh to be seen, but not necessarily interacted with; in other words, it has no collider. You can apply shaders and materials to it as you see fit, but it will still not have any physics interactions without its counterpart.
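To make this concrete, here is a minimal sketch that puts both components on one GameObject. It assumes Unity's built-in HoloLens components in the `UnityEngine.XR.WSA` namespace (the exact namespace depends on your Unity version), and is illustrative rather than a complete setup:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

public class SpatialMappingSetup : MonoBehaviour
{
    void Start()
    {
        // Renderer: makes the scanned surfaces visible (materials/shaders apply here).
        var smRenderer = gameObject.AddComponent<SpatialMappingRenderer>();
        smRenderer.renderState = SpatialMappingRenderer.RenderState.Visualization;

        // Collider: bakes MeshColliders so the same surfaces can receive physics.
        var smCollider = gameObject.AddComponent<SpatialMappingCollider>();
        smCollider.enableCollisions = true;
    }
}
```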

To see and interact with a surface you do need both components, and this does introduce a performance hit on the device. Many apps, such as Fragments from Microsoft, get around this by having the user scan their space and then storing that scan on the device. This allows them to shut off the components during play and free up those system resources again.
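As a sketch of that scan-then-stop pattern: assuming the components expose the `freezeUpdates` property (available on `SpatialMappingBase` in recent Unity versions), you could halt live scanning after an initial scan phase. The duration here is purely illustrative:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR.WSA;

public class ScanThenFreeze : MonoBehaviour
{
    public SpatialMappingRenderer smRenderer; // assumed assigned in the Inspector
    public SpatialMappingCollider smCollider;
    public float scanSeconds = 30f;           // illustrative scan duration

    IEnumerator Start()
    {
        // Let the user scan the room for a while...
        yield return new WaitForSeconds(scanSeconds);

        // ...then stop observing, so the device stops paying for live updates.
        smRenderer.freezeUpdates = true;
        smCollider.freezeUpdates = true;
    }
}
```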

A bunch more info to read up on here if you haven’t seen this yet.

Hope that helps! Feel free to hit us up with further questions.

First of all, thank you for your answer. I do understand what the components are actually doing. And they are perfectly fine if you just need to use only one of them. What I do not understand though is why the functionality was separated into two actual components. There has to be a lot of redundant memory consumption and computing power usage. Let me try to explain.

Please correct me if I am wrong, but the standard workflow in Unity is to have one single GameObject for a specific non-primitive mesh in the scene. This GameObject would have a MeshRenderer and a MeshCollider component attached, both of which use one single mesh as their source, provided by a MeshFilter component.
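That standard pattern can be sketched like this (a simplified illustration; `sourceMesh` is a placeholder for whatever mesh asset you use):

```csharp
using UnityEngine;

public class SharedMeshExample : MonoBehaviour
{
    public Mesh sourceMesh; // assumed to be assigned in the Inspector

    void Awake()
    {
        // One MeshFilter provides the mesh...
        var filter = gameObject.AddComponent<MeshFilter>();
        filter.sharedMesh = sourceMesh;

        // ...the MeshRenderer draws whatever the MeshFilter holds...
        gameObject.AddComponent<MeshRenderer>();

        // ...and the MeshCollider reuses the same mesh: no duplication.
        var meshCollider = gameObject.AddComponent<MeshCollider>();
        meshCollider.sharedMesh = sourceMesh;
    }
}
```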

However, the Spatial Mapping components create two separate (though functionally identical) SurfaceObserver objects to repeatedly access spatial mapping data on the device, and instantiate twice as many SurfaceData (= mesh data) objects as needed. So we have doubled API calls, doubled memory consumption, and doubled computing-power usage for the same task. Only at this point do the Spatial Mapping components add a renderer or a collider to the SurfaceData objects, respectively.

If all of this were handled by a single Spatial Mapping component, only one instance of each SurfaceData would exist in the scene. Furthermore, because every surface has a MeshFilter applied, one could easily add a MeshRenderer and/or a MeshCollider to it, each of which could be disabled separately on demand to cover each use case.

That would also eliminate the potential issue of the two components' general settings not being configured identically. And if unequal scan settings are an actual use case, one could simply use two separate Spatial Mapping components and enable or disable rendering and collision respectively to mimic the current behavior.
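A hypothetical unified component along those lines (every name here is illustrative; this is not an existing Unity API) might look like:

```csharp
using UnityEngine;

// Hypothetical sketch of the proposed design: one object per surface,
// with rendering and collision toggled independently on shared mesh data.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer), typeof(MeshCollider))]
public class UnifiedSurface : MonoBehaviour
{
    public bool renderSurface = true;
    public bool collideSurface = true;

    // Called whenever new surface mesh data arrives from the observer.
    public void Apply(Mesh surfaceMesh)
    {
        GetComponent<MeshFilter>().sharedMesh = surfaceMesh; // single copy of the data

        GetComponent<MeshRenderer>().enabled = renderSurface;

        var meshCollider = GetComponent<MeshCollider>();
        meshCollider.sharedMesh = surfaceMesh;               // same mesh, no duplication
        meshCollider.enabled = collideSurface;
    }
}
```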

You bet, anytime! One thing to note is that you could write your own script to handle this behavior. What Unity currently offers, in component form, is a simpler "plug and play" approach for less experienced developers, so they have an easier time getting up and running by just adding the two components.

One additional tip, if you're struggling with performance, is to adjust the SR resolution, which I've seen done in apps that require live scanning to persist while the app runs, when storing the room scan is not a suitable option.

If you want to dig into the low-level scripting to create your own solution, this might be helpful:
https://docs.unity3d.com/Manual/windowsholographic-sm-lowlevelapi.html
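As a rough sketch of what that low-level route looks like, here one `SurfaceObserver` feeds both a MeshFilter and a MeshCollider per surface, avoiding the duplication discussed above. This follows the `UnityEngine.XR.WSA` API as I understand it from the linked page; treat the details (volume size, triangle density) as placeholder values:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.WSA;

public class SingleObserverExample : MonoBehaviour
{
    SurfaceObserver observer;
    readonly Dictionary<SurfaceId, GameObject> surfaces =
        new Dictionary<SurfaceId, GameObject>();

    void Start()
    {
        observer = new SurfaceObserver();
        // Observe a 5 m cube around the origin (illustrative volume).
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, new Vector3(5f, 5f, 5f));
    }

    void Update()
    {
        // Poll the device for added/updated/removed surfaces.
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change,
                          Bounds bounds, System.DateTime updateTime)
    {
        if (change != SurfaceChange.Added && change != SurfaceChange.Updated)
            return;

        GameObject go;
        if (!surfaces.TryGetValue(id, out go))
        {
            go = new GameObject("Surface-" + id.handle);
            go.AddComponent<MeshFilter>();
            go.AddComponent<MeshRenderer>();
            go.AddComponent<MeshCollider>();
            go.AddComponent<WorldAnchor>();
            surfaces[id] = go;
        }

        // One SurfaceData fills the MeshFilter AND bakes the MeshCollider.
        var data = new SurfaceData(
            id,
            go.GetComponent<MeshFilter>(),
            go.GetComponent<WorldAnchor>(),
            go.GetComponent<MeshCollider>(),
            1000f,   // triangles per cubic meter (placeholder density)
            true);   // bake the collider from the same mesh

        observer.RequestMeshAsync(data, OnDataReady);
    }

    void OnDataReady(SurfaceData sd, bool outputWritten, float elapsedBakeTimeSeconds)
    {
        // Mesh and collider for this surface are now ready to use.
    }
}
```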

What does "SR resolution" stand for? Actually, Spatial Mapping doesn't seem to be much of a problem performance-wise. Instead, I have huge problems with the rendering of the Unity UI. I experience constant frame-rate drops down to 30 fps, especially when I use a Scroll View.

Do you have text intersecting with any of the UI components, i.e. text on a panel? That might be the cause. If that is the case, we might have an issue on the HoloLens.

Can you send an example screenshot, please?

Sorry for the late response. I am not sure how to determine any intersection. I have a hierarchy of Rect Transforms as child objects of a Canvas. I use the standard Text (Script) UI-Component.

Here is a screenshot from the scene view:
[Screen Shot]

You can ignore the simple audio-player interface, because the issue persists even with text components alone, although it's less noticeable then. The frame-rate drops seem more significant with more UI elements.

The frame-rate drops depend on the distance between the camera and the Canvas: coming closer makes it worse, and moving a few meters away fixes it, which is quite strange. This performance problem only happens with a Scroll View in place; there is no issue without it whatsoever. I guess the actual problem could be related to the masking, because disabling the Mask (Script) component gets rid of the issue completely. But obviously the Scroll View is pointless without the masking.

Starting a Mixed Reality Capture (or other heavy-load tasks) causes the frame rate to drop to 30 fps too. The issue is not noticeable then, because it does not decrease the frame rate any further. That's also quite an interesting fact, I guess.

Furthermore:

  • Moving the Text objects on the local z-axis away from the Canvas has no effect.
  • I couldn’t reproduce the issue in the Editor.