Harmonic coordinates mesh deform


I’m making something similar to the Blender mesh deform (which also uses harmonic coordinates) for Unity. It’s based on the paper “Harmonic Coordinates for Character Articulation” by Pixar and uses ports of existing projects. I’ll probably release it for free when it’s done.

https://graphics.pixar.com/library/HarmonicCoordinates/

I’ve currently ported the 2D code from GitHub - lttsh/HarmonicCoordinates (an INF555 project from Ecole polytechnique on articulating objects with harmonic coordinates in 2D) and the 3D code from GitHub - Toolchefs/harmonicDeformer (a harmonic deformer for Autodesk Maya).

A bit more information about harmonic coordinates can be found at

https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
http://pages.cs.wisc.edu/~csverma/CS777/bary.html

I only have a basic running test at the moment, e.g. a few points inside a cube. I’m still working on speeding it up. Dynamic mode (which supports morphs) seems to use a lot of memory, so I might not implement it.

2 Likes

After trying it out it looks like the algorithm is really simple.
[Image: WeightTest.png]
Steps:

  • Voxelize the surface of the cage mesh.

  • Find the voxels inside the surface (the fill cells).

Then, for each cage vertex:

  • Set all weights to 0 (surface and fill cells).

  • Apply a weight of 1 to the cell at the cage vertex location (left image).

  • Lerp the weight from 1 to 0 across all connected surface-triangle cells using barycentric coordinates (middle image).

  • Blur all the fill cells; this doesn’t affect the surface cells (the right image has the blur applied 10 times). A rough sketch of this step follows the list.

  • For each vertex in the skinned mesh, assign the cell weight based on its position (trilinear interpolation between cells).
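
Here’s a rough sketch of that blur/diffusion step, assuming a flat N×N×N array where each cell has already been classified as exterior, surface or fill by the voxelization. The class and names are mine, not the project’s actual code: surface cells keep their seeded weights while fill cells repeatedly take the average of their six neighbors.

```csharp
// Sketch only: per-control-point weight diffusion on a voxel grid.
enum CellType : byte { Exterior, Surface, Fill }

class HarmonicGrid
{
    public int N;                 // grid resolution (N x N x N)
    public CellType[] Types;      // cell classification from the voxelization step
    public float[] Weights;       // weight of the current control point per cell

    public HarmonicGrid(int n)
    {
        N = n;
        Types = new CellType[n * n * n];
        Weights = new float[n * n * n];
    }

    int Index(int x, int y, int z) => x + N * (y + N * z);

    // One blur/relaxation pass: fill cells take the average of their 6 neighbors,
    // surface cells keep the weights seeded from the cage triangles.
    public void Relax(int iterations)
    {
        var next = (float[])Weights.Clone();
        for (int it = 0; it < iterations; it++)
        {
            for (int z = 1; z < N - 1; z++)
            for (int y = 1; y < N - 1; y++)
            for (int x = 1; x < N - 1; x++)
            {
                int i = Index(x, y, z);
                if (Types[i] != CellType.Fill) { next[i] = Weights[i]; continue; }
                next[i] = (Weights[Index(x - 1, y, z)] + Weights[Index(x + 1, y, z)]
                         + Weights[Index(x, y - 1, z)] + Weights[Index(x, y + 1, z)]
                         + Weights[Index(x, y, z - 1)] + Weights[Index(x, y, z + 1)]) / 6f;
            }
            var tmp = Weights; Weights = next; next = tmp;
        }
    }
}
```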

[Image: VoxelTest.png]
One interesting thing I noticed is that the original 3D sample and GitHub - mattatz/unity-voxel (mesh voxelization for Unity) both seem to raycast each triangle to create the voxel surface (Möller–Trumbore intersection algorithm). Instead I just create a grid of points over the surface (spacing is half the cell size, as shown in the left image) and set each cell from the point position (cast to int == cell position), as shown in the right image. It runs much faster (size 100 cubed with the Unity sphere: 1 second for the triangle raycasts vs. 0.05 seconds for the point grid) and it looked more accurate near the edges. I’m not sure why it’s not more commonly used. Creating the voxel grid on the CPU will probably be fine, but I’ll probably use a compute shader to blur the fill cells.
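
Something like this is what I mean (a reconstruction of the idea, not the actual code, and the names are made up): sample each triangle at roughly half-cell spacing and mark the cell each sample lands in, so no raycasts are needed.

```csharp
// Sketch only: point-sampling voxelization of a triangle mesh surface.
using UnityEngine;

static class PointGridVoxelizer
{
    // grid: flattened n*n*n array of "surface cell" flags.
    public static void MarkSurface(Vector3[] verts, int[] tris, bool[] grid,
                                   int n, Vector3 gridOrigin, float cellSize)
    {
        float step = cellSize * 0.5f; // sample spacing: half a cell

        for (int t = 0; t < tris.Length; t += 3)
        {
            Vector3 a = verts[tris[t]], b = verts[tris[t + 1]], c = verts[tris[t + 2]];

            // Enough subdivisions that adjacent samples are no further apart than 'step'.
            float edge = Mathf.Max(Vector3.Distance(a, b),
                                   Mathf.Max(Vector3.Distance(b, c), Vector3.Distance(c, a)));
            int div = Mathf.Max(1, Mathf.CeilToInt(edge / step));

            // Barycentric sampling across the triangle.
            for (int i = 0; i <= div; i++)
            for (int j = 0; j <= div - i; j++)
            {
                float u = i / (float)div;
                float v = j / (float)div;
                Vector3 p = a + u * (b - a) + v * (c - a);

                // Cell index is just the position cast to int (no intersection tests).
                int x = Mathf.Clamp((int)((p.x - gridOrigin.x) / cellSize), 0, n - 1);
                int y = Mathf.Clamp((int)((p.y - gridOrigin.y) / cellSize), 0, n - 1);
                int z = Mathf.Clamp((int)((p.z - gridOrigin.z) / cellSize), 0, n - 1);
                grid[x + n * (y + n * z)] = true;
            }
        }
    }
}
```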

1 Like

For this test I used the Sintel Lite character, with a grid 150 cells high and 250 iterations. This takes about 15 seconds to process. The cached results are then used in a skinning compute shader. I limit the maximum number of control-point weights per vertex to simplify the shader (currently set to 32); a sketch of that step is below. I still need to add weights to switch between the mesh deform and standard skinning (hands / face). Inaccuracies from this method can cause the shape to change, e.g. the character is slightly inflated. I might just use the error / direction to recalculate the weights.
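
A minimal sketch of the top-32 pruning, assuming the full per-vertex weight array has already been computed (the names and the renormalization choice are my own, not the project’s code):

```csharp
// Sketch only: keep the K largest control-point weights per vertex and renormalize.
using System.Linq;

static class WeightPruning
{
    public const int MaxWeights = 32;

    // allWeights[i] = harmonic weight of control point i for this vertex.
    // Returns (controlPointIndex, weight) pairs, at most MaxWeights of them.
    public static (int index, float weight)[] TopK(float[] allWeights)
    {
        var top = allWeights
            .Select((w, i) => (index: i, weight: w))
            .OrderByDescending(p => p.weight)
            .Take(MaxWeights)
            .ToArray();

        float sum = top.Sum(p => p.weight);
        if (sum > 0f)
            for (int i = 0; i < top.Length; i++)
                top[i].weight /= sum;     // weights still sum to 1 after pruning

        return top;
    }
}
```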


Toon-style shaders work better when the normal detail is removed. Each vertex already has weights to the cage-mesh control points, so I tried applying those weights to the cage-mesh normals instead, which reduces the normal detail at runtime. A rough sketch is below.
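
Roughly what that looks like, assuming the baked per-vertex cage indices/weights are available on the CPU (names are hypothetical):

```csharp
// Sketch only: replace each vertex normal with the weighted sum of cage control-point normals.
using UnityEngine;

static class CageNormals
{
    // indices/weights: baked top-N control points per vertex (flattened),
    // cageNormals: current normals of the cage control points.
    public static void Apply(int[] indices, float[] weights, int weightsPerVertex,
                             Vector3[] cageNormals, Vector3[] outNormals)
    {
        for (int v = 0; v < outNormals.Length; v++)
        {
            Vector3 n = Vector3.zero;
            for (int k = 0; k < weightsPerVertex; k++)
            {
                int i = v * weightsPerVertex + k;
                n += weights[i] * cageNormals[indices[i]];
            }
            outNormals[v] = n.normalized; // smooth, low-detail normal for toon shading
        }
    }
}
```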

1 Like

This is great, thank you for putting this together!

In this test the green area uses normal bone skinning while the black area uses the mesh deform. I might need to add a low-resolution dynamic version to fix issues with toon facial expressions (it should be possible to use normal skinning for the face and the deform cage for the normals). You can see the error between the linear skinned mesh (middle) and the deform cage (right): the deform cage is slightly inflated. I’ll post a download after I clean up a few things.

1 Like

This is really interesting stuff. What do you feel would give better performance than dynamic mode?

Also – This looks like it can be used as a tool for easier manual rigging / weight painting too.

For example, the workflow would be to make a low-poly cage, paint the verts for each body part (e.g. each finger) a different RGB value, then transfer that value to named bone weights based on the RGB values applied per vertex. It would be easy to select a bunch of bone indexes (automatically found when loading the model via the vertex colors) and apply a “blend” between the indexes so that no manual bone manipulation would ever have to occur. You’d get a solid rig, bones would be generated automatically based on color index, and the hierarchy could be sorted later if you needed it. Perfect companion to the modular rigging we now have in Unity. What do you think?

Honestly, I always liked the idea of a rigging workflow where you could work backwards this way.

1 Like

I forgot to mention the difference between dynamic mode and standard mode.

Dynamic mode: stores a grid of weights for every control point. This is fine for a low-resolution grid and cage, e.g. a head cage with 100 vertices and a 10x10x10 grid => 100 * 10 * 10 * 10 * 4 bytes (float) ≈ 400 KB. The character needed a larger grid to get triangles on smaller areas like the arms: a body cage with ~600 vertices and a 150x150x150 grid => 600 * 150 * 150 * 150 * 4 bytes (float) ≈ 8 gigabytes. This could be reduced by only storing the normalized top 32 weights per cell or by reducing the grid size, but it’s still too large. To find a deformed mesh’s vertex position it takes the rest position in the grid and uses trilinear interpolation to find the weight of each control point. Example with 4 control points: pos = c0.pos * c0.weight + c1.pos * c1.weight + c2.pos * c2.weight + c3.pos * c3.weight.
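
For reference, a sketch of what that dynamic-mode lookup would look like (my own illustration, not code from the project): sample each control point’s grid trilinearly at the vertex’s rest position, then sum weight × control point position.

```csharp
// Sketch only: dynamic-mode evaluation from per-control-point weight grids.
using UnityEngine;

static class DynamicModeSample
{
    // weights: flattened n*n*n grid for ONE control point; gridPos is the vertex
    // rest position already converted to grid coordinates.
    static float SampleTrilinear(float[] weights, int n, Vector3 gridPos)
    {
        int x = Mathf.Clamp((int)gridPos.x, 0, n - 2);
        int y = Mathf.Clamp((int)gridPos.y, 0, n - 2);
        int z = Mathf.Clamp((int)gridPos.z, 0, n - 2);
        float fx = gridPos.x - x, fy = gridPos.y - y, fz = gridPos.z - z;

        float C(int i, int j, int k) => weights[(x + i) + n * ((y + j) + n * (z + k))];

        float c00 = Mathf.Lerp(C(0, 0, 0), C(1, 0, 0), fx);
        float c10 = Mathf.Lerp(C(0, 1, 0), C(1, 1, 0), fx);
        float c01 = Mathf.Lerp(C(0, 0, 1), C(1, 0, 1), fx);
        float c11 = Mathf.Lerp(C(0, 1, 1), C(1, 1, 1), fx);
        float c0 = Mathf.Lerp(c00, c10, fy);
        float c1 = Mathf.Lerp(c01, c11, fy);
        return Mathf.Lerp(c0, c1, fz);
    }

    // pos = sum_i w_i(restPos) * controlPoint_i, evaluated every frame in dynamic mode.
    public static Vector3 Deform(float[][] perControlPointGrids, int n,
                                 Vector3 gridRestPos, Vector3[] controlPoints)
    {
        Vector3 pos = Vector3.zero;
        for (int i = 0; i < controlPoints.Length; i++)
            pos += SampleTrilinear(perControlPointGrids[i], n, gridRestPos) * controlPoints[i];
        return pos;
    }
}
```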

Standard mode just processes each control point with the same grid (the data is cleared on each run). The results are baked per vertex and the grid is never used at runtime. The Sintel deform mesh has around 20,000 vertices, so it stores the top 32 control-point indices and weights for each vert => 20,000 * 32 * 8 bytes (one float, one int) ≈ 5 MB.
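
The baked standard-mode data ends up looking something like this (again a sketch with made-up names; the real skinning runs on the GPU equivalents in the compute shader):

```csharp
// Sketch only: baked per-vertex cage influences and the runtime weighted sum.
using UnityEngine;

struct CageInfluence
{
    public int controlPoint; // 4 bytes
    public float weight;     // 4 bytes -> 8 bytes per entry, 32 entries per vertex
}

static class StandardModeSkinning
{
    // influences is flattened: 32 entries per vertex.
    public static void Deform(CageInfluence[] influences, Vector3[] controlPoints,
                              Vector3[] outPositions)
    {
        const int K = 32;
        for (int v = 0; v < outPositions.Length; v++)
        {
            Vector3 p = Vector3.zero;
            for (int k = 0; k < K; k++)
            {
                var inf = influences[v * K + k];
                p += inf.weight * controlPoints[inf.controlPoint];
            }
            outPositions[v] = p;
        }
    }
}
```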

Thanks for the suggestion, I never thought about using it as a rigging / weight painting tool. After thinking about it for a while, it sounds like it could be quite useful. One nice thing about the mesh deform is that it reduces mesh intersection. When I was testing “optimized centers of rotation skinning” the skinning weights weren’t correct, so you’ll see the leg mesh passing through the pants. In the earlier deform video you’ll notice that they don’t intersect (both examples used auto skinning). It sounds like generating the bones should be easy (the bone up directions might need adjusting afterward). Even if the bones are manually set up it would still be useful to try using the deform cage as a skinning template, e.g. to auto-skin different clothing. I might try it out if I have some free time.

1 Like

That sounds awesome! – I’m dealing with this clothes skinning problem right now actually.

Yep, the up directions should probably be done in something like an outliner/hierarchy area. That is, maybe provide a local/world space toggle for the generated bones. Base this up direction on bone chains (rather than the individual bones themselves).

Here are some facts about bones generated this way:

  • Bones are defined in the outliner via color.

  • Bones are arranged in the hierarchy based on order of selection in a multi-selection list.

  • Bone centers are defined based on the center of a volume of points for all verts colored with a particular bone ID.

  • A rough start/end point is generated for each bone based on hierarchy of color ID or using chain/hierarchy data if necessary (see next bullet point).

  • Special bones (i.e. start/end bones in a chain) can have their start or end positions offset locally to determine the “start” of the first bone or “tip/end” of the final bone when they are the first or last in a set of bones defined as a chain.

  • Bones that are offset as a “start/end” point are used automatically to determine the direction/location of all of the bones in that particular chain hierarchy.

  • Bone lengths are determined by vertex positions relative to their chain direction (if they are part of a chain); otherwise the direction can be tweaked by rotating it around.

  • Bone length for individual bones is always automatically determined by the last vertex in the current bone ID’s volume, along that bone’s direction/orientation.

  • Bones part of the same chain will have their start/end bone positions automatically determined by their hierarchy and start/end points.

Hopefully that gives you an idea of how I’m thinking this might work.

Honestly, every time I think about it, the very fact that I have to build the rig first has always seemed like a waste of time to me, since the “bones” concept is strictly user-facing – the game engine only cares about the weighting of individual verts in the end. Having to create bones is only a convenience if you plan to later attach something to the model, or need some control points to drag around to pose it and keep track of the vert deformations.

Ultimately, selecting multiple bones and dragging them around (or applying a hierarchy-sorting function to them) in an outliner to determine bone hierarchy after painting bone IDs on the mesh (which determine which verts the bones would influence) would be much easier than manually fiddling with rotation, scale, parenting, or painting weights for tiny bones (such as the fingers) by hand. An outliner of bone names/colors that act as their IDs is really the core of this. Just use the selection order of the “bone” names in a list of multi-selected bones to arrange the generated “bones” into a hierarchy (i.e. simply RMB → “Arrange in Hierarchy” in that outliner).

The bone origin could be determined from the center of a point volume plus an offset (if you wanted to tweak an individual bone center/start/end point manually). Bone direction and length could then easily be derived from the left- or right-most verts of a “start” bone in the outliner hierarchy, pointed toward the “tip” of an end bone in that hierarchy. These would be assumed to be positioned in space relative to one another: the last selected bone’s potential “tip” would determine the end-point direction through the centers of the individual bone-ID point volumes, while the first selected bone’s “start” point (taking its offset into account) would determine the overall starting point of all the bones in the chain. That point would then be pushed out toward the “center” of the nearest bone(s), with the length adjusted to the paint job for each section (maybe using “rings” of vertices to help guide you if necessary; otherwise the manual offset of a bone’s “start” could be used for things like curved fingers).

Something like this would let you get a general bone chain “positioned, parented, and pointed” without the tedious process of creating the bones themselves – you could simply paint their IDs on the points and have your rig generated from the clouds of points you’ve defined by painting them (which you’ve already got), their center volumes (determined by vertex color / bone ID), and the relationships between each volume’s points to determine bone length and direction.
I would consider adding one extra layer (i.e. the alpha channel) as a black/white value that determines how much blend to apply to the nearby verts’ weights for mixing/fading influence (so the colors themselves stay straight 100% the RGB value you applied per vertex). That way you wouldn’t have to fool with selecting which bone to blend with which other bone – the alpha applied to a vertex would determine the blending amount, while the color channels simply set the bone ID to use. A rough sketch of that mapping is below.
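
To make that concrete, here is a very rough sketch of the mapping being proposed; the color→ID table and the choice to blend toward the parent bone in the generated hierarchy are my assumptions, not an existing implementation.

```csharp
// Sketch only: RGB picks the bone ID, alpha fades that bone's influence toward its parent.
using System.Collections.Generic;
using UnityEngine;

static class ColorRigWeights
{
    public static BoneWeight ToBoneWeight(Color32 paint,
                                          Dictionary<int, int> rgbToBone, // packed RGB -> bone index
                                          int[] parentBone)               // -1 for root bones
    {
        int rgb = (paint.r << 16) | (paint.g << 8) | paint.b;
        int bone = rgbToBone[rgb];
        float blend = paint.a / 255f; // 1 = fully this bone, 0 = fully its parent

        int parent = parentBone[bone] >= 0 ? parentBone[bone] : bone;
        return new BoneWeight
        {
            boneIndex0 = bone,   weight0 = blend,
            boneIndex1 = parent, weight1 = 1f - blend
        };
    }
}
```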

The closest analogue is Geodesic Voxel Heat Diffuse Skinning, which is similar and seems magical, but is actually still tedious since it is applied to an existing bone rig that still has to be set up by hand. This workflow is (and has always been) backwards to me, since the “bone” concept is still a user-facing construct that has no bearing on what is actually happening behind the scenes. As such, simply “painting” the verts that should belong to each bone with a different color, then painting in a blend value from black to white (using the vertex alpha channel), seems much more straightforward to me.

Besides, allowing one to generate both the bones and the rig at the same time would save loads of time when you’ve got a character you might prefer to be more “manual” with, yet still don’t know exactly how you need your rig to be set up (and would like to try out and tweak skinning in-engine until you’ve got the blending correct). Being able to quickly regenerate the weights (and the rig) using nothing but colors, point-cloud volumes, and point/distance relationships to determine the bones, together with an outliner that shows bone-chain hierarchy and lets you quickly reorder and apply hierarchical relationships to a multi-selected set of bones (based on the order they were selected), would be an amazingly useful rigging tool. I hope you get time to play around with this! Something like this sounds very promising! :)

1 Like

Thanks for the detailed reply. Unfortunately I won’t have much time to work on it for a few months, but hopefully someone else can get started with the sample project and your description. One thing I was curious about is animation in a system like this: would you export the final rig back into an animation program like Blender, or use a more generic animation system like the modular rigging system you mentioned?

I’ve included the test project. The default scene shows the skinning (mainly the sintelDeform/GEO-body.001/MeshDeform component). I had to remove the animation because it made the upload too large. Unhide the “sintelBuilder” game object to generate the weights for the “sintelHarmonicMesh” asset (takes about 18 seconds). The code can be used in a commercial game, but I don’t want to see anyone selling this asset.

[Attachment: HarmonicCoordinates.unitypackage (3.92 MB)]

2 Likes

I can easily see use cases for both options (depending on how someone might want to actually create animations), but an FBX would always be more versatile. I would personally prefer a bridge into the modular rigging system (since that’s where an in-engine feature like this would really shine), but you’d need to keep track of the rig hierarchy and the model hierarchy to make that work, and that can be painful. An FBX option (i.e. just reimport the model when the skeleton is regenerated, with the option to replace it, preferably with two buttons to do one or both) would probably be simpler.

If you used something like Addressables, you could quickly import a shell mesh from anywhere on your hard drive if you needed a shell around the model to map the bones to clothes or extra appendages. You could then discard the shell model once you’ve used it to map the bones/clothes, without ever having to manually import it into your project and delete it later. So much time/energy saved. :)

1 Like