The demo is not running at playable frame rates on my computer.
But to be frank, I mostly need it for face morphs and animation blendshapes. That's the hardest and slowest part of making a human nowadays; people aren't as demanding about anything below the neck, so deforming a base mesh is enough.
I'm personally going to use MakeHuman in the short term. The mesh is passable enough and the export is CC0, but the head expression morphs are horrible, and some parts like the ears need retouching. The texture isn't great either, but I'm not too concerned about it: studying dark skin showed me that skin and its details can be automated to good-enough results, and breaking it into hue, saturation and luminance components gives me a lot of control over accuracy.
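To give an idea of what I mean by working per component, here is a tiny sketch using Unity's built-in helpers. They give hue/saturation/value rather than true luminance, and the class and method names are just placeholders, but it shows how you can tweak one component without touching the others.

```csharp
using UnityEngine;

// Sketch: adjust a skin tone by scaling only its value component,
// leaving hue and saturation untouched (V is value, not true luminance).
public static class SkinToneTweak
{
    public static Color ScaleValue(Color skin, float valueScale)
    {
        Color.RGBToHSV(skin, out float h, out float s, out float v);
        return Color.HSVToRGB(h, s, Mathf.Clamp01(v * valueScale));
    }
}
```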
What remains is blendshapes. I downloaded the Heretic demo for reference and isolated the head morphs. So far I've figured out how to extract them from Unity's baked data (using BakeMesh() and storing each result as a mesh in an array), and next is to run a series of experiments to test assumptions and portability: for example looking at vector divergence from the base mesh, comparing local displacement to the average, etc., and trying to infer rules that will help author things automatically.
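The extraction is something along these lines (a minimal sketch; the class and field names are mine, not from the demo). Each blendshape is dialed to full weight in isolation and the posed mesh is baked out with BakeMesh():

```csharp
using UnityEngine;

// Sketch: bake each blendshape of a SkinnedMeshRenderer into its own Mesh.
// Attach to the head object and assign the renderer in the inspector.
public class BlendshapeExtractor : MonoBehaviour
{
    public SkinnedMeshRenderer head;   // the skinned head renderer
    public Mesh[] bakedShapes;         // neutral pose + one baked mesh per blendshape

    void Start()
    {
        int count = head.sharedMesh.blendShapeCount;
        bakedShapes = new Mesh[count + 1];

        // Bake the neutral pose first (all weights at 0).
        for (int i = 0; i < count; i++) head.SetBlendShapeWeight(i, 0f);
        bakedShapes[0] = new Mesh();
        head.BakeMesh(bakedShapes[0]);

        // Bake each shape in isolation at full weight, then reset it.
        for (int i = 0; i < count; i++)
        {
            head.SetBlendShapeWeight(i, 100f);
            var m = new Mesh();
            m.name = head.sharedMesh.GetBlendShapeName(i);
            head.BakeMesh(m);
            bakedShapes[i + 1] = m;
            head.SetBlendShapeWeight(i, 0f);
        }
    }
}
```

Comparing each baked mesh against bakedShapes[0] gives the per-vertex displacement to experiment on.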
I also need to look into Laplacian stuff to extract data. That's the divergence of the gradient… FOR ARTISTS, TRANSLATION: the gradient is basically how things change between two neighboring elements (in a heightmap that would be the slope direction between heights), and the divergence is simply the difference between neighboring slopes, which I intend to capture as a dot product. Basically I will do a UV unwrap storing the model-space position per fragment, take the gradient of that, then the divergence, and look at how it evolves from base mesh to blendshape. That should give me something topology-agnostic. I have no idea if that's the academic Laplacian mesh transform though…
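To make "divergence of the gradient" concrete, here is a rough sketch of the discrete version on a position map (a UV-space grid where each texel stores the model-space position of the surface). This is the standard per-channel 5-point image Laplacian, not the cotangent mesh Laplacian from the literature, and it ignores UV seams and empty texels; the dot-product variant I mentioned would replace the magnitude step at the end.

```csharp
using UnityEngine;

// Sketch: discrete Laplacian of a position map, and how much it changes
// between the base mesh and a blendshape, texel by texel.
public static class PositionMapLaplacian
{
    // 5-point stencil: sum of the 4 neighbours minus 4x the centre, per component.
    public static Vector3[,] Laplacian(Vector3[,] posMap)
    {
        int w = posMap.GetLength(0);
        int h = posMap.GetLength(1);
        var result = new Vector3[w, h];

        for (int x = 1; x < w - 1; x++)
        for (int y = 1; y < h - 1; y++)
        {
            result[x, y] = posMap[x - 1, y] + posMap[x + 1, y]
                         + posMap[x, y - 1] + posMap[x, y + 1]
                         - 4f * posMap[x, y];
        }
        return result;
    }

    // Texels where this delta is large are where the shape actually bends the
    // surface, independently of absolute position, which is what should make
    // the measure roughly topology-agnostic.
    public static float[,] LaplacianDelta(Vector3[,] baseMap, Vector3[,] shapeMap)
    {
        var lb = Laplacian(baseMap);
        var ls = Laplacian(shapeMap);
        int w = baseMap.GetLength(0), h = baseMap.GetLength(1);
        var delta = new float[w, h];

        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            delta[x, y] = (ls[x, y] - lb[x, y]).magnitude;
        return delta;
    }
}
```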
I have already noticed a few things for optimization, and figured out where the pain points are. The Apple FACS system, for example, is the minimum set of expressive blendshapes; anything more is just control and accuracy on the same set. So while the Heretic has 300 shapes and Polywink 150, they are the same shapes with extras, or combined shapes for convenience. The mouth shapes cover a large number of blendshapes and concentrate nearly all the difficulty on any metric (capture, animation or sculpting), but they are also the most YAGNI if you don't aim at cinematic realism. Solving facial performance is basically solving the mouth area first.
Edit
Also, a tip: if you can map any head model to the same UV layout, transferring data from one to another is trivial, no matter what the topology is. You just render the parameter results to a texture, then sample the texture back per vertex using the vertex's UV position. Easy transfer from high poly to low poly.
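The read-back side amounts to something like this on the CPU (assuming the data has already been rendered into a CPU-readable Texture2D, e.g. read back from a RenderTexture; names are placeholders):

```csharp
using UnityEngine;

// Sketch of the UV-based transfer: data baked into a texture in the shared UV
// layout is pulled back per vertex on any other mesh that uses the same UVs.
public static class UvDataTransfer
{
    // Returns one sampled value per vertex of targetMesh.
    public static Color[] SamplePerVertex(Mesh targetMesh, Texture2D dataTexture)
    {
        Vector2[] uvs = targetMesh.uv;
        var samples = new Color[uvs.Length];

        for (int i = 0; i < uvs.Length; i++)
        {
            // Bilinear sample at the vertex's UV; topology doesn't matter,
            // only that both meshes share the same UV layout.
            samples[i] = dataTexture.GetPixelBilinear(uvs[i].x, uvs[i].y);
        }
        return samples;
    }
}
```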