Detecting whether the mouse is over a bone, and rendering the bone

Hi there, complete Unity neophyte here with 20+ years general coding experience.

I brought in a rigged mesh just fine, but I see that in order to select and pose individual bones I have to select a bone from the hierarchy, then manually enter new rotation values in the Inspector.

For a whole bunch of reasons I want to create a utility that will allow me to pose the mesh in Unity (and do other things based on bone selection) in a manner that more closely resembles programs like Daz Studio and Poser: highlight individual limbs as I mouse over them (by highlighting the affected polys and/or, preferably, the bone itself rendered as a line), click to select a bone, and use sliders and/or 3D GUI controllers to change its orientation and do other stuff.

Among the "other stuff": apply algorithmic deformations to the polys affected by the selected bone, in a manner that relies on the idea that the bone is an actual vector with a start and end point.

For selection/highlighting, I guessed that I would be able to get a screen projection of the bounding box of each bone's imagined vector, check whether the pointer was inside that bounding box, and then draw a line to show the selected bone.

But when I started digging into the hierarchy of skinInstance.Bones I discovered it's a bunch of transforms, not vectors. My initial thought was "that's OK, I can just examine the transforms of child bones to get an endpoint and therefore the 'length' of a bone", but further contemplation made me realize that is inadequate, both because a bone can have multiple child bones with different offsets and because leaf nodes of the skeleton (head, fingers, toes) only have a starting point, not an ending point, using this method.
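For illustration, something like this sketch (untested, all names are mine) is roughly what I was picturing, with its flaws baked in: treat each bone as a segment from its transform to its (arbitrary) first child, project both endpoints to screen space, and hit-test the mouse against the resulting rect:

```csharp
using UnityEngine;

// Naive bone picking: bone = segment from its transform to its first child.
// Breaks down for bones with multiple children and for leaf bones.
public class NaiveBonePicker : MonoBehaviour
{
    public SkinnedMeshRenderer skinInstance; // assumed assigned in the Inspector

    Transform PickBone(Vector2 mousePos, Camera cam)
    {
        foreach (Transform bone in skinInstance.bones)
        {
            if (bone.childCount == 0) continue;  // leaf bone: no endpoint this way
            Vector3 a = cam.WorldToScreenPoint(bone.position);
            Vector3 b = cam.WorldToScreenPoint(bone.GetChild(0).position); // arbitrary if multiple children
            if (a.z < 0f || b.z < 0f) continue;  // behind the camera

            Rect box = Rect.MinMaxRect(
                Mathf.Min(a.x, b.x), Mathf.Min(a.y, b.y),
                Mathf.Max(a.x, b.x), Mathf.Max(a.y, b.y));
            if (box.Contains(mousePos)) return bone; // first match wins
        }
        return null;
    }
}
```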

Is this a quixotic quest? Is there any way of pulling off what I want to pull off? The concept of a bone being an easily discernible 3D vector wasn't just important for my posing goals; I conceived of a whole algorithmic deformation system that fundamentally relies on it.

"Just do it in Blender then import" or anything similar is most definitely not the solution I'm looking for. Also, my desired utility will work on one and only one skinned mesh at a time, and I've already created an orbital camera that mouse-orbits around that mesh.

If you're just looking to highlight whatever bone is being moused over, it might be better to approach it from the mesh side of things rather than the transform side. In a perfect world, you would raycast to find which part of the mesh the mouse is over, find the nearest vertex in the mesh, and use the mesh's boneWeights array to determine which bone you are mousing over.
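For the boneWeights part, I mean something like this sketch (DominantBone is just a name I made up): given a vertex index, however you found it, return the bone with the strongest influence on that vertex.

```csharp
using UnityEngine;

public static class BonePicking
{
    // Each vertex carries up to four weighted bone influences;
    // pick the one with the largest weight.
    public static Transform DominantBone(SkinnedMeshRenderer smr, int vertexIndex)
    {
        BoneWeight w = smr.sharedMesh.boneWeights[vertexIndex];
        int best = w.boneIndex0;
        float bestWeight = w.weight0;
        if (w.weight1 > bestWeight) { best = w.boneIndex1; bestWeight = w.weight1; }
        if (w.weight2 > bestWeight) { best = w.boneIndex2; bestWeight = w.weight2; }
        if (w.weight3 > bestWeight) { best = w.boneIndex3; }
        return smr.bones[best];
    }
}
```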

The biggest issue with that: I have not yet figured out how to raycast against a mesh that does not have a collider (and skinned meshes usually don't). So perhaps you could "fake" a raycast: loop through the mesh's vertices, convert them all to screen space, and find a vertex that is within a certain range of the mouse pointer's position. You could use the distance to the camera to make sure you're selecting the vertex nearest the front. After that, the logic is the same. The downside to this approach: it would probably only work well on relatively dense meshes (otherwise the algorithm would fail to detect when you're in the middle of a large triangle).
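A rough sketch of that "fake" raycast, assuming you bake the current pose with SkinnedMeshRenderer.BakeMesh so the vertices match what's on screen (pickRadius and the class name are made up, and BakeMesh has scale caveats worth checking):

```csharp
using UnityEngine;

public class VertexPicker : MonoBehaviour
{
    public SkinnedMeshRenderer smr;
    public float pickRadius = 12f; // pixels; pure guesswork, tune to taste

    int PickVertex(Vector2 mousePos, Camera cam)
    {
        var baked = new Mesh();
        smr.BakeMesh(baked); // posed vertices in the renderer's local space (beware non-unit scale)
        Vector3[] verts = baked.vertices;
        Transform t = smr.transform;

        int bestIndex = -1;
        float bestDepth = float.MaxValue;
        for (int i = 0; i < verts.Length; i++)
        {
            Vector3 sp = cam.WorldToScreenPoint(t.TransformPoint(verts[i]));
            if (sp.z < 0f) continue; // behind the camera
            if (Vector2.Distance(new Vector2(sp.x, sp.y), mousePos) > pickRadius) continue;
            if (sp.z < bestDepth) { bestDepth = sp.z; bestIndex = i; } // keep the frontmost hit
        }
        Destroy(baked); // avoid leaking meshes if called every frame
        return bestIndex; // -1 if nothing was near the pointer
    }
}
```

The returned index feeds straight into a boneWeights lookup like the one above.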

This is all sort of spitballing here, and I'm not sure whether it would actually be functional or not. Hope it helps anyway.


The spitballing is appreciated, although I was hoping for a solution that didn't involve doing that. I was hoping for bones as vectors because it makes the detection a thousand times easier: looping through the screen-space bounding boxes of a few projected vectors is obviously way faster than looping through all the tris, and obviously you want a fast response for a posing tool.

I already had a long, hard look at raycasting to a tri and read several threads on such attempts, on this forum and elsewhere. I've tried adding a mesh collider and updating it to reflect animations so I could raycast to a tri, but (a) that is slooooow, (b) there are problems with blendshapes, and (c) it doesn't meet my morph logic requirements.
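For reference, the collider-refresh approach I tried looked roughly like this; rebaking and rebuilding the collider every frame is what made it so slow:

```csharp
using UnityEngine;

public class SkinnedColliderUpdater : MonoBehaviour
{
    public SkinnedMeshRenderer smr;
    public MeshCollider meshCollider;
    Mesh baked;

    void LateUpdate()
    {
        if (baked == null) baked = new Mesh();
        baked.Clear();
        smr.BakeMesh(baked);            // expensive on dense meshes
        meshCollider.sharedMesh = null; // force the collider to rebuild
        meshCollider.sharedMesh = baked; // the rebuild is the slow part
    }
}
```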

Among other things I want to, say, be able to pick two points along a bone and apply a sine or square wave deformation to tris influenced by that bone, but only the tris that fall within a radial area of influence around that section of the bone (or a partial arc). This is for rapidly creating curved or square extrusions and depressions along limbs, torso and head, so that modelling simple clothing or additional musculature is easy. The idea was to not only deform, but also to get the texture coordinates of the deformed area and have a kind of "paint-on morph/blendshape" feature, then save the resulting differential morph process and associated texture overlays as clothing items, so that they can be algorithmically applied to instances of the mesh in a wardrobe system.
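To make the deformation idea concrete, here's a very rough sketch of the sine case, assuming the bone really is a segment (a, b). Every name and parameter here is hypothetical, and it ignores weights, vertex normals, and the partial-arc case:

```csharp
using UnityEngine;

static class WaveDeform
{
    // Push vertices that fall within `radius` of the [t0, t1] sub-section of
    // the bone segment (a, b) radially outward by a sine profile.
    public static void Apply(Vector3[] verts, Vector3 a, Vector3 b,
                             float t0, float t1, float radius, float amplitude)
    {
        Vector3 axis = b - a;
        float lenSq = axis.sqrMagnitude;
        for (int i = 0; i < verts.Length; i++)
        {
            // parameter of the vertex's projection onto the bone segment
            float t = Vector3.Dot(verts[i] - a, axis) / lenSq;
            if (t < t0 || t > t1) continue;

            Vector3 onAxis = a + axis * t;
            Vector3 radial = verts[i] - onAxis;
            if (radial.magnitude > radius) continue; // outside the area of influence

            // sine bump: zero at the ends of the section, peak in the middle
            float s = Mathf.Sin((t - t0) / (t1 - t0) * Mathf.PI);
            verts[i] += radial.normalized * (amplitude * s);
        }
    }
}
```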

This would allow the creation of, say, a runtime, saved shirt or pants or boots or gloves process which can be applied to different instances of the mesh that have different blendshapes applied. In other words, because the clothing is implemented algorithmically, the same clothing would work for a short, fat character or a tall, skinny one. Typical mesh-replacement schemes for wardrobes don't allow this. And aside from the clothing, it would make for easy modelling of things like musculature (which is most often just sinusoidal extrusions).

I'm now wondering if I should just add my own companion "bone" objects that run the length of each bone's area of influence and keep references to their affected tris. Obviously I'd like to generate those algorithmically in the first place if possible, even if some manual tweaking is required.
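If I go that route, one heuristic I can imagine for auto-generating the companion segments: for each bone, gather the vertices it dominates, aim the segment from the bone's head at their centroid, and extend it to the farthest such vertex along that direction. Entirely a guess on my part (names are mine), assuming the character is roughly in bind pose when it runs, and treating boneIndex0 as the dominant influence rather than comparing all four weights:

```csharp
using System.Collections.Generic;
using UnityEngine;

public struct BoneSegment
{
    public Vector3 start, end;      // world-space segment for one bone
    public List<int> vertexIndices; // vertices this bone dominates
}

public static class CompanionBones
{
    public static BoneSegment[] Build(SkinnedMeshRenderer smr)
    {
        Mesh mesh = smr.sharedMesh;
        BoneWeight[] weights = mesh.boneWeights;
        Vector3[] localVerts = mesh.vertices; // bind-pose positions
        Transform meshT = smr.transform;

        // bucket each vertex under the bone that influences it most
        var buckets = new List<int>[smr.bones.Length];
        for (int i = 0; i < weights.Length; i++)
        {
            int b = weights[i].boneIndex0;
            if (buckets[b] == null) buckets[b] = new List<int>();
            buckets[b].Add(i);
        }

        var segments = new BoneSegment[smr.bones.Length];
        for (int b = 0; b < segments.Length; b++)
        {
            Vector3 head = smr.bones[b].position;
            List<int> bucket = buckets[b] ?? new List<int>();
            segments[b] = new BoneSegment { start = head, end = head, vertexIndices = bucket };
            if (bucket.Count == 0) continue;

            Vector3 centroid = Vector3.zero;
            foreach (int i in bucket) centroid += meshT.TransformPoint(localVerts[i]);
            centroid /= bucket.Count;

            // extend from the head toward the centroid, out to the farthest vertex
            Vector3 dir = (centroid - head).normalized;
            float maxT = 0f;
            foreach (int i in bucket)
                maxT = Mathf.Max(maxT, Vector3.Dot(meshT.TransformPoint(localVerts[i]) - head, dir));
            segments[b].end = head + dir * maxT;
        }
        return segments;
    }
}
```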