I am trying to update the shape of a Mesh Collider based on a rigged Skinned Mesh Renderer, and as far as I understand the only way to do this is to write the vertex positions into a Mesh using BakeMesh() and then assign that as the Mesh Collider’s sharedMesh.
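For context, the pattern I’m using looks roughly like this (a minimal sketch; the component references and field names are my own, assumed to be assigned in the Inspector):

```csharp
using UnityEngine;

// Sketch: bake the SkinnedMeshRenderer's current pose into a Mesh
// each frame and hand it to the MeshCollider.
public class SkinnedColliderUpdater : MonoBehaviour
{
    public SkinnedMeshRenderer skinnedRenderer;
    public MeshCollider meshCollider;

    private Mesh bakedMesh;

    void Awake()
    {
        // Reuse one Mesh instance to avoid allocating a new one per frame.
        bakedMesh = new Mesh();
    }

    void LateUpdate()
    {
        // BakeMesh writes the fully skinned vertex positions into bakedMesh.
        skinnedRenderer.BakeMesh(bakedMesh);

        // Reassigning sharedMesh forces the physics engine to re-cook
        // the collision geometry, which is itself an expensive step.
        meshCollider.sharedMesh = null;
        meshCollider.sharedMesh = bakedMesh;
    }
}
```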
I was hoping someone could explain why SkinnedMeshRenderer.BakeMesh() is needed at all. As far as I understand, this method recalculates all the vertex positions from the bone weights and the rig’s current pose, which makes it a fairly expensive operation. But surely this work has already been done somewhere: the animation is already playing on screen, so those vertex positions must have been calculated. Does it really need to be done from scratch again?
My issue is that doing this every frame on a mesh with ~13k tris is taking my frame rate from ~150fps to ~45fps on quite a powerful laptop.
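One partial mitigation I’ve considered (it reduces the cost rather than fixing it) is baking at a lower rate than the render loop; a sketch, with the interval as an assumed tuning value:

```csharp
using UnityEngine;

// Sketch: only re-bake the collider every 'bakeInterval' frames instead of
// every frame, trading collider accuracy for frame rate.
public class ThrottledColliderBake : MonoBehaviour
{
    public SkinnedMeshRenderer skinnedRenderer;
    public MeshCollider meshCollider;
    public int bakeInterval = 4; // assumed tuning value

    private Mesh bakedMesh;

    void Awake()
    {
        bakedMesh = new Mesh();
    }

    void LateUpdate()
    {
        // Skip most frames; the collider lags the visuals slightly.
        if (Time.frameCount % bakeInterval != 0) return;

        skinnedRenderer.BakeMesh(bakedMesh);
        meshCollider.sharedMesh = null;
        meshCollider.sharedMesh = bakedMesh;
    }
}
```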
It seems to me that it would be possible to expose something like a SkinnedMeshRenderer.deformedMesh somewhere? Or does BakeMesh() simply take the already-posed vertex data and convert it to Mesh data, rather than recalculating everything from scratch?
Any help greatly appreciated!
Thanks!
Rob
PS Is it possible that writing the deformed vertex data back to memory is the costly part, and that this isn’t needed just to render the deformed mesh? Is that why it has to be recalculated all over again?
Yes, you have to. The SkinnedMeshRenderer performs the actual skinning on the GPU using the bone matrices, so the deformed vertex positions are never calculated on the CPU side.
Even if they were, the physics system does not really support any kind of soft-body physics. We only have rigidbody physics, so a single object is treated as a rigid object. An animated object changes shape, which the physics system is not designed to handle. The MeshCollider actually has a separate representation in the physics system. Changing the collider through BakeMesh may work, but it depends on the kind of deformations and what kind of collisions the object is facing. You don’t get proper collision responses, since the object didn’t “move” to cause a collision; you effectively teleported / morphed the collider into an overlapping position, and the physics system simply applies penalty forces to separate the objects.
Yes, that’s usually the best solution for bone-based animated meshes. Though such colliders would usually be used as triggers for hit detection and not collisions. They may be used for collisions when you turn the character into a ragdoll. Though in that case every bone would have its own rigidbody and you connect them with joints.
Well, that’s way too general. MeshColliders are the only colliders that can give you texture coordinates at the hit point and allow you to properly interpolate any vertex attribute there. Also, for the static environment, a single MeshCollider actually performs pretty well.
A deforming MeshCollider wouldn’t make that much sense anyway, as colliders attached to a rigidbody should be convex. I think in the latest Unity versions you cannot use a non-convex MeshCollider on a rigidbody at all (3D physics).
I know you’re right, and one day I’ll probably even need a MeshCollider. I haven’t needed one yet, though, and every time I give them a try they disappoint me, and I manage to get what I want from a composition of primitive colliders.