Using Face Capture with any model

Hi everyone,

I’m a filmmaker starting to play around with Unity. The Road to Realtime series was really helpful, since there aren’t many tutorials on the subject and the ones available are either outdated or too focused on a game dev audience.

Anyway, my next step was to try the new Face Capture app with the Realtime Rascals, but I got stuck: the blend shape mapping basically threw my model out of whack. I followed the documentation, but the head position went crazy (a long-neck effect), and I couldn’t find a way to map the expressions to the corresponding blend shapes.
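In case it helps anyone reproduce this, here’s a rough diagnostic sketch for dumping every blend shape name on a model, so they can be compared against the ARKit names the mapper expects (eyeBlinkLeft, jawOpen, and so on). The script is just illustrative, not part of the Live Capture package:

```csharp
using UnityEngine;

// Illustrative diagnostic: logs every blend shape on a character so the
// names can be compared against the ARKit names the Face Capture mapper
// expects (eyeBlinkLeft, jawOpen, mouthSmileLeft, ...).
public class BlendShapeLister : MonoBehaviour
{
    void Start()
    {
        foreach (var smr in GetComponentsInChildren<SkinnedMeshRenderer>())
        {
            var mesh = smr.sharedMesh;
            if (mesh == null) continue;

            Debug.Log($"{smr.name}: {mesh.blendShapeCount} blend shapes");
            for (int i = 0; i < mesh.blendShapeCount; i++)
            {
                // Matching is name-based, so any prefix the DCC tool added
                // (e.g. "Face.eyeBlinkLeft") shows up here.
                Debug.Log($"  [{i}] {mesh.GetBlendShapeName(i)}");
            }
        }
    }
}
```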

I also tried importing some free 3D models from the Asset Store and followed the Startup Guide, again with no result. I can get basic head movement (smoothing the mapping kinda reduced the weird long-neck effect), but the eyes, for instance, would actually disappear when I selected them in the Mapper.
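The smoothing trick feels like a band-aid, so here’s the kind of workaround I’ve been sketching for the long-neck effect, assuming it comes from the capture driving the head bone’s position as well as its rotation: pin the head bone back to its rest position in LateUpdate and let only the rotation through. Whether this is the intended way to configure Face Capture, I don’t know:

```csharp
using UnityEngine;

// Hypothetical workaround: keep the head bone's local position fixed at its
// rest pose so only the captured rotation is applied. LateUpdate runs after
// animation, so this overrides whatever position the capture wrote.
public class PinHeadPosition : MonoBehaviour
{
    public Transform headBone;        // assign the model's head bone here
    Vector3 restLocalPosition;

    void Start()
    {
        if (headBone != null)
            restLocalPosition = headBone.localPosition;
    }

    void LateUpdate()
    {
        if (headBone != null)
            headBone.localPosition = restLocalPosition;  // drop position, keep rotation
    }
}
```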

I don’t know if anyone else has faced similar problems, or what the workflow is for solving this. I could play around with the sample scene with no problem.

I am a hobbyist filmmaker (aka NOT a filmmaker…) trying to do simple animations with Unity (more like a lightly animated comic) - experimental trial: https://alankent.github.io/extra-ordinary-amp-stories/vseries1/uep1/

In case it’s helpful, I have been writing up some personal notes on my blog as I learn things. No particular structure, more writing stuff down as I learn it (with mistakes corrected in later posts). https://extra-ordinary.tv/category/unity/

I tried VRoid Studio characters with ARKit blendshapes added via HANA_Tool - it works with other apps (e.g. iFacialMocap) but did not work first time with Unity… though I have not had a chance to try again yet. I have also played with a few tools like DeepMotion.com (it turns video clips into animation clips). What I found was that if the animation clip was created using a tall model (in terms of dimensions) and then played back on a short model, I got similar-sounding weird neck stretching. No idea how to solve it in Face Capture, sorry, but calibration seemed to be the issue.
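If it helps with debugging that kind of scale mismatch, a rough diagnostic I would try (assuming both rigs are imported as Humanoid) is to log the avatar’s human scale and hips height on both models - a tall-source/short-target mismatch should show up as very different numbers. Just a sketch, the real cause in Face Capture may be elsewhere:

```csharp
using UnityEngine;

// Rough diagnostic for retargeting scale mismatches: assumes the model is
// imported as a Humanoid rig. Logs the values that differ between a tall
// source rig and a short target rig.
[RequireComponent(typeof(Animator))]
public class RigScaleReport : MonoBehaviour
{
    void Start()
    {
        var animator = GetComponent<Animator>();
        if (!animator.isHuman)
        {
            Debug.LogWarning($"{name}: rig is not Humanoid, retargeting will not apply.");
            return;
        }

        // Hips is always mapped on a Humanoid rig, so this is safe here.
        var hips = animator.GetBoneTransform(HumanBodyBones.Hips);
        Debug.Log($"{name}: humanScale = {animator.humanScale}, " +
                  $"hips height = {hips.position.y - transform.position.y}");
    }
}
```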

So, sorry, I have no concrete help in resolving your issues, but I would love to hear about any progress you make! I am currently exploring VSeeFace.icu (face and upper body → VMC protocol) with EVMC4U (a VMC protocol receiver in Unity) and EasyMotionRecorder (which records movements into an animation clip), because they are more commonly used by the VRoid Studio / VTuber community. I plan to expand into Three D Pose Tracker (free full body tracking software), but have not got there yet. All free software, but as a result it feels like it’s held together with string and sticky tape at times. (So I am not recommending them over Face Capture - just saying there are other tools around if you cannot get it to work.) But it is exciting to see the great progress all these projects are making! The TDPT project was exploring finger tracking straight from a webcam along with full body movements - very cool stuff.

Thank you for answering @akent99 ! I know there are a lot of free options out there to do this right now, but I was hoping the Face Capture solution was gonna be more plug-and-play than it turned out to be. I’m definitely impressed by what you’re doing - I actually read your blog this week and found it really useful. I’m nowhere near your level of technical knowledge though, as I’m still kind of getting to know Unity itself and trying to stay in-the-box as much as possible. I just feel that all of these tools are a rabbit hole that would demand a lot of my time, so I’m trying to take baby steps before I plunge in and start taking it more seriously.

I’ve been trying to understand how the Live Capture sample head is set up, since it makes Face Capture such a seamless experience. When using the sample head it just works, and it looks so promising. Please let me know if you try it out yourself with a VRoid model and what your findings are. From what I understand, the 3D model itself just needs to have a Right Eye, Left Eye and Head in the object structure so they’re properly linked to the Face Capture app and the mapper can make good use of them. But when I brought in models from the Asset Store, trying to link the mapper to those elements just made them go wacky, and I got stuck there, not knowing what to try next. I’m also realizing that the sample head is just that, a head, so when trying to link it to a model with a full body, the head position makes the neck go really long and end up in a really weird position.
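To test that theory on a store-bought model, something like the sketch below might work, assuming the model is imported as a Humanoid rig (a generic rig would need the transforms found by name instead). The script is just my guess at a sanity check, not anything from the Live Capture docs:

```csharp
using UnityEngine;

// Quick check that a Humanoid rig actually exposes the transforms the
// face mapper seems to rely on: the head plus both eyes. Missing bones
// log a warning so you know before wiring up the mapper.
[RequireComponent(typeof(Animator))]
public class FaceBoneCheck : MonoBehaviour
{
    static readonly HumanBodyBones[] required =
    {
        HumanBodyBones.Head, HumanBodyBones.LeftEye, HumanBodyBones.RightEye
    };

    void Start()
    {
        var animator = GetComponent<Animator>();
        if (!animator.isHuman)
        {
            Debug.LogWarning($"{name}: rig is not Humanoid, no bones to check.");
            return;
        }

        foreach (var bone in required)
        {
            var t = animator.GetBoneTransform(bone);
            if (t == null)
                Debug.LogWarning($"{name}: no transform mapped for {bone}");
            else
                Debug.Log($"{name}: {bone} -> {t.name}");
        }
    }
}
```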

Hope more people are trying this out so we can help each other! Very excited about this community.