Need Help: Improving Performance of Mesh Collider Creation

Hi everyone.

I’ve been talking about this in the other topic about Unity 5 being multithreaded, but it was a bit off topic and so I figured I would make a new topic (does this belong in another section?)

I’m making a voxel engine game (nothing like Minecraft mind you, very unique). And essentially what you do in a voxel engine is you procedurally generate the mesh for each chunk. So my chunks are currently 16x64x16 blocks in size.

I calculate the vertices, the triangles, and the UVs for each block in the chunk in a secondary thread. I then have to pass those to the actual mesh in the main thread.

Now, for the collider (I’d love to be able to use the mesh collider system rather than write my own, because writing my own would be very complicated), I use the mesh I already calculated by using meshCollider.sharedMesh = (mesh I already calculated).
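Roughly, the flow looks like this (a simplified sketch; ChunkMeshData and BuildChunkData are placeholder names rather than my actual code):

```csharp
// Simplified sketch of the threaded chunk build. ChunkMeshData/BuildChunkData
// are placeholder names, not the real project code.
using System.Threading;
using UnityEngine;

public class ChunkMeshData
{
    public Vector3[] vertices;
    public int[] triangles;
    public Vector2[] uvs;
}

public class ChunkBuilder : MonoBehaviour
{
    public MeshFilter meshFilter;
    public MeshCollider meshCollider;

    volatile ChunkMeshData pending; // written by the worker thread, consumed on the main thread

    void Start()
    {
        // Worker thread: pure math only, no UnityEngine object creation here.
        new Thread(() => { pending = BuildChunkData(); }).Start();
    }

    void Update()
    {
        if (pending == null) return;

        // Main thread: Mesh objects can only be created/assigned here.
        var mesh = new Mesh();
        mesh.vertices = pending.vertices;
        mesh.triangles = pending.triangles;
        mesh.uv = pending.uvs;
        mesh.RecalculateNormals();

        meshFilter.sharedMesh = mesh;
        meshCollider.sharedMesh = mesh; // <- this assignment is where the spike happens
        pending = null;
    }

    ChunkMeshData BuildChunkData()
    {
        // Fill vertices/triangles/uvs from the 16x64x16 voxel data (omitted).
        return new ChunkMeshData { vertices = new Vector3[0], triangles = new int[0], uvs = new Vector2[0] };
    }
}
```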

And this works wonderfully, except that it spikes when it does that meshCollider.sharedMesh = mesh part. It causes some FPS drop and some choppiness. And I don’t like that. So I was hoping the multithreaded physics system in Unity 5 would help with that, but apparently it won’t.

So… what could I do to make this perform better? Any ideas? If you’ve made a voxel engine using the mesh colliders (or Unity’s physics system in general), how did you make it perform better? Or did you write your own physics?

Thanks in advance.

You could always use box colliders. To avoid having a zillion of them, just make a few and position them around the player as dictated by the voxels.

–Eric

So multithreading is unlikely to be a magic cure here - even assuming a 400% improvement, your 16ms budget for a frame is toast.

Rendering and physics are very different things. For example, it often makes sense not to render polys that the camera can’t see… however, for many games, if you take the same approach for physics, things won’t work. Physics is usually presumed to work regardless of whether or not you can see it - even simple things like an FPS player standing on ground they cannot see.


I have two simple suggestions:

Option A) Try alternative strategies - e.g. make your collider a rigid body, use simpler colliders etc.

Option B) If you’re still stuck, post an SSCCE (sscce.org)

It is a magic cure, because it doesn’t actually matter that much when the collider is done computing. As I stated in the other thread, back when Unity allowed it I used multi-threading for this and it worked great, solving all problems… aside from the not-thread-safe-crashy bit.

–Eric


That’s async, not multithreading. Don’t confuse the two.

It worked great… except for the multithreading aspect. Which is why I continually try to make the distinction between async and threaded. You tried to make it async by threading, and while the async part worked fine the threading part crashed and burned.

I suspect what you did would have worked fine if you had made the thread blocking (and added some locks)… but then you’d have lost the performance increase gained by async.

I’m aware of that, but if it’s just async then you don’t get any benefits from multiple CPU cores. Since mesh collider generation is CPU-intensive, it helps a lot if you can actually run it in parallel with the rest of the engine.

–Eric

Possibly, but you can only get that benefit if:

A) You’ve already got async code.
B) You need it.
C) It works.

To quote yourself:

I wanted a mesh collider to be computed without bringing the engine to a temporary halt. I didn’t actually care whether the collider finished computing by the next frame, or 50 frames from now

Threading is a non-issue; this is purely a case of async. Threading solves other issues (e.g. maximising resource usage, offloading work from the main thread).

That’s pretty interesting. I could, every frame, calculate voxel coordinates for the player. Then I could check the 10 or so blocks around the player and see if there’s anything other than air there. If so, place a box collider; if not, remove it.

Wonder how that would work… going to see if I can try to get something like that working.
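Something like this, maybe (a rough sketch; IsSolid is a stand-in for the actual voxel lookup, and it assumes 1-unit blocks aligned to integer coordinates):

```csharp
// Rough sketch of the "colliders around the player" idea. IsSolid is a
// placeholder for the engine's own voxel lookup.
using System.Collections.Generic;
using UnityEngine;

public class PlayerColliderPool : MonoBehaviour
{
    public Transform player;
    public int radius = 2; // how many blocks around the player to check
    readonly List<BoxCollider> pool = new List<BoxCollider>();

    void Update()
    {
        int px = Mathf.FloorToInt(player.position.x);
        int py = Mathf.FloorToInt(player.position.y);
        int pz = Mathf.FloorToInt(player.position.z);
        int used = 0;

        for (int x = -radius; x <= radius; x++)
        for (int y = -radius; y <= radius; y++)
        for (int z = -radius; z <= radius; z++)
        {
            int vx = px + x, vy = py + y, vz = pz + z;
            if (!IsSolid(vx, vy, vz)) continue; // skip air

            if (used == pool.Count) // grow the pool lazily, reuse it afterwards
            {
                var go = new GameObject("VoxelCollider");
                pool.Add(go.AddComponent<BoxCollider>());
            }

            BoxCollider box = pool[used++];
            box.transform.position = new Vector3(vx + 0.5f, vy + 0.5f, vz + 0.5f);
            box.enabled = true;
        }

        // Disable whatever wasn't needed this frame instead of destroying it.
        for (int i = used; i < pool.Count; i++)
            pool[i].enabled = false;
    }

    bool IsSolid(int x, int y, int z)
    {
        // Placeholder: ask the voxel data whether this block is non-air.
        return false;
    }
}
```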

Edit - thought of one issue. I use a raycast to place and delete blocks. I cast a ray from the player to where the mouse clicked, and if it hits then it places a block there or deletes one. With this system I couldn’t do that. I’d need some way of figuring out the voxel coordinates of where I clicked… hm…

I would totally go box colliders on this one.

You mentioned (in the other thread) that you only want visible sides to be collidable, but that makes no sense to me. Why is visibility related to collidability? For instance, if another player throws a rock at you, that rock should hit voxels in front of you, colliding with faces you can’t see. Why mix those two domains together?

Personally, I’d start with a box collider on every voxel. If that gets to be too slow then I’d look at implementing a system to batch voxels together to share colliders.

From the raycast you know a) the world position of the click’s position and b) the direction of the ray. From that you should be able to pretty easily figure out which of the voxels you clicked on - there’s a maximum of 8 voxels adjacent to any given world position, and the ray’s direction tells you which of those to select.
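For example, something along these lines (a sketch assuming 1-unit voxels on integer coordinates; VoxelPicking and its method names are just illustrative):

```csharp
// Sketch of picking the voxel from a hit point plus the ray direction. Nudging
// the hit point a tiny distance along the ray lands inside the block that was
// hit; nudging backwards lands in the empty block in front of the face (where
// a new block would be placed).
using UnityEngine;

public static class VoxelPicking
{
    public static Vector3 HitBlock(RaycastHit hit, Ray ray)
    {
        Vector3 p = hit.point + ray.direction * 0.001f; // step just inside the face
        return new Vector3(Mathf.Floor(p.x), Mathf.Floor(p.y), Mathf.Floor(p.z));
    }

    public static Vector3 PlacementBlock(RaycastHit hit, Ray ray)
    {
        Vector3 p = hit.point - ray.direction * 0.001f; // step just outside the face
        return new Vector3(Mathf.Floor(p.x), Mathf.Floor(p.y), Mathf.Floor(p.z));
    }
}
```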

^This.

Or if that system gives you issues, you could simply increase the range at which you’re adding box colliders around the player to cover your interaction range, and still use the raycast system. Unless your player can interact with things from a great distance, you’re not likely to hit performance issues.

It’s possible by visible voxels he means surface voxels. It’s a waste to create a collider for a voxel that’s 50 units underground, for example. I think they also use the same terminology for lighting in the minecraft thread, where visible means exposed, instead of just visible from the player’s current position.

You don’t need colliders to do raycasting; it’s not that hard to write your own when you have something well-defined like voxels. Basically you make a ray, see if there’s a voxel at that position, if not advance down the ray by the length of a voxel, check again, etc. until you hit a voxel, go out of bounds or reach the max distance you want to check.
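A minimal version of that could look something like this (isSolid here is a placeholder for the world’s own block lookup; a proper grid-traversal/DDA walk is more exact at corners, but this is the simple stepping version):

```csharp
// Simple stepping voxel raycast, as described above: march along the ray and
// stop at the first solid voxel, or give up past maxDistance.
using System;
using UnityEngine;

public static class VoxelRaycast
{
    public static bool Cast(Vector3 origin, Vector3 direction, float maxDistance,
                            Func<int, int, int, bool> isSolid, out Vector3 hitVoxel)
    {
        direction.Normalize();
        const float step = 0.5f; // half a voxel per step to reduce skipped corners

        for (float t = 0f; t <= maxDistance; t += step)
        {
            Vector3 p = origin + direction * t;
            int x = Mathf.FloorToInt(p.x);
            int y = Mathf.FloorToInt(p.y);
            int z = Mathf.FloorToInt(p.z);

            if (isSolid(x, y, z))
            {
                hitVoxel = new Vector3(x, y, z);
                return true;
            }
        }

        hitVoxel = Vector3.zero;
        return false;
    }
}
```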

–Eric

Right, that makes sense. It should also make it easier to get a simple optimisation system in place, right?

Also, NomadKing’s idea about swapping out aggregate colliders for individual colliders within an interaction distance is a good one. The only concern I’d have with that is that it probably means GC allocation just from moving around, rather than only as a result of editing geometry.

Did you ever get a solution to this, @JasonBricco ? I’m in exactly the same position, where everything is threaded and lovely until I add the meshCollider to my gameobject, then it takes ~90ms to synchronously bake the physics data. I’m perfectly happy to bake this myself and hand it on to PhysX, but am unable to find any information about doing so.

Bump. I have a similar issue where I’d like to hand PhysX a pre-baked mesh collider and just have it trust it without doing any computation.


I’ve been building my procedural mesh generation game in Unity as well, and I love C# and Unity as a tool, but lately they’ve been focusing on all these “artist” tools and movie/cinema-oriented tools; they seem to have forgotten about game developers… Now Unreal has just released multithreaded PhysX mesh baking. I thought I had personally settled the performance shortcomings with C#, but then there are all these gotchas… I understand that dynamic mesh generation is not a strong point of either engine, but it seems UE4 is taking the advantage here…

https://www.unrealengine.com/en-US/blog/unreal-engine-4-17-released

I’m also looking for a way to better offload this from the main thread. We sadly need to load MeshColliders with vertex counts easily passing the 1,000,000 mark in our VR experience, which just freezes the main thread. As @vbs stated, UE4 has taken steps towards this feature. Will Unity be dealing with this any time soon?

The best thing you can do is preload them all and, instead of loading and unloading them, simply change the layer to a non-collision layer, which is much, much faster. You could literally have a million cube colliders in the scene preloaded without affecting performance (assuming they aren’t moving with RBs).

As the character controller moves through the scene, simply update the colliders’ layers in batches, without the changes causing an update to the physics system.

Especially if you update the layers in batches with hyperthreading, I bet you would see almost no performance cost at all updating 25,000+ cubes at a time.

If that isn’t an option, preload them within a certain range of the character controller, in stages, and have it update in the background instead of all at once.

But by still making use of layers instead of loading and unloading mesh colliders, you can have a combination of both.
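A rough sketch of what that could look like (the layer names, range and batch size here are all assumptions, not from any real project; the “SleepingVoxels” layer would be unchecked against everything in the Layer Collision Matrix under Project Settings > Physics):

```csharp
// Sketch of the layer-swap idea: every collider exists up front, and instead of
// loading/unloading we flip distant ones onto a layer that collides with nothing.
using UnityEngine;

public class ColliderLayerSwapper : MonoBehaviour
{
    public Transform player;
    public Collider[] allColliders; // preloaded when the world is generated
    public float activeRange = 20f;
    public int batchSize = 1000;    // how many colliders to touch per frame

    int activeLayer, sleepingLayer, cursor;

    void Start()
    {
        activeLayer = LayerMask.NameToLayer("ActiveVoxels");
        sleepingLayer = LayerMask.NameToLayer("SleepingVoxels");
    }

    void Update()
    {
        if (allColliders.Length == 0) return;

        // Walk the array a batch at a time, wrapping around, so the cost is
        // spread over many frames instead of hitting one frame all at once.
        for (int i = 0; i < batchSize; i++)
        {
            cursor = (cursor + 1) % allColliders.Length;
            Collider c = allColliders[cursor];

            bool near = (c.transform.position - player.position).sqrMagnitude
                        < activeRange * activeRange;
            int wanted = near ? activeLayer : sleepingLayer;

            if (c.gameObject.layer != wanted)
                c.gameObject.layer = wanted;
        }
    }
}
```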

Give it a try :wink:

Wouldn’t it be better to do this before the level loads when you generate the world/scene instead of real time? You would have a lot more options for optimization techniques.

I was looking into that too, but now (Nov 2018) we can just disable the cooking with MeshColliderCookingOptions.None.

Then you just have to make sure your procedural mesh is valid, as they say in the manual.
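For anyone finding this later, that looks roughly like this (a sketch; as far as I know cookingOptions is available from Unity 2017.3 on):

```csharp
// Rough sketch of skipping the cooking options. With the cleaning/welding steps
// turned off, the mesh you assign must already be valid: no degenerate
// triangles, no out-of-range or duplicate indices, etc.
using UnityEngine;

public static class FastColliderAssign
{
    public static void Assign(MeshCollider collider, Mesh mesh)
    {
        collider.cookingOptions = MeshColliderCookingOptions.None;
        collider.sharedMesh = mesh; // much cheaper now, but garbage in = garbage out
    }
}
```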