Been trying to get to grips with how raycasting works, and something that bothered me is that raycasting works with colliders, not rendered meshes. However, the way it seems to be used (for example in the scripting reference) suggests it refers to the underlying mesh used for rendering, which may not be the same thing. It's entirely possible for the rendering mesh and the collision mesh to be completely different (not quite sure why you would do this, but it is possible). What happens in this situation?
I guess that RaycastHit.textureCoord just returns the UV coordinate on the collider mesh, as that is the only thing it knows about. Then, when the associated script tries to do something intelligent with the rendered mesh, it just results in garbage. Am I right? Or have I not understood how this all works?
You're absolutely right. Only a MeshCollider will actually return textureCoords; all other colliders don't have UVs. Of course, the returned coordinates belong to the collision mesh and have no relation to the visible representation. Keep in mind that a GameObject can even have just a collider, without any renderer at all.
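For example, a quick way to see this from a script (just a sketch; the class name UvProbe is made up) is to check the type of the hit collider before trusting textureCoord:

    using UnityEngine;

    public class UvProbe : MonoBehaviour
    {
        void Update()
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                MeshCollider meshCollider = hit.collider as MeshCollider;
                if (meshCollider == null)
                {
                    // Primitive colliders (sphere, box, capsule) have no UVs,
                    // so textureCoord is just Vector2.zero here.
                    return;
                }
                // These UVs belong to the collision mesh, not necessarily
                // to the mesh the MeshRenderer is drawing.
                Debug.Log("UV on the collision mesh: " + hit.textureCoord);
            }
        }
    }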
By default the MeshCollider uses the same mesh the MeshRenderer draws, but you can use a different mesh if you like. Of course, if you do, the physical representation differs from the visual representation.
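A minimal sketch of that setup, assuming you have some simplified physics proxy mesh (simplifiedMesh is a placeholder you would assign yourself in the Inspector):

    using UnityEngine;

    public class SeparateCollisionMesh : MonoBehaviour
    {
        public Mesh simplifiedMesh; // e.g. a low-poly proxy of the visible mesh

        void Start()
        {
            // The MeshRenderer keeps drawing whatever the MeshFilter holds;
            // raycasts and physics now use simplifiedMesh instead.
            GetComponent<MeshCollider>().sharedMesh = simplifiedMesh;
        }
    }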
The MeshFilter mesh is assigned to the MeshCollider when the latter is created, but after that you may assign a different mesh to the collider at any time, by script or in the Editor. The collider mesh isn't automatically kept in sync either, so modifying the MeshFilter mesh by script doesn't affect the collider - you must assign the new mesh to collider.sharedMesh in order to update it. Finally, textureCoord only works for mesh colliders - primitive colliders like sphere, box or capsule set it to Vector2.zero.
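So a script that swaps the rendered mesh at runtime also has to push it into the collider, roughly like this (RebuildCollider and ApplyMesh are made-up names for illustration):

    using UnityEngine;

    public class RebuildCollider : MonoBehaviour
    {
        // Call this whenever you replace the rendered mesh by script.
        public void ApplyMesh(Mesh newMesh)
        {
            GetComponent<MeshFilter>().sharedMesh = newMesh;

            // The collider does not follow the MeshFilter automatically,
            // so assign the new mesh to it explicitly.
            GetComponent<MeshCollider>().sharedMesh = newMesh;
        }
    }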