Hello Everyone,
We’ve been working on some scripts for rigging and facial animation. We think they’ve come a long way, but we’d be very interested in your opinions.
We have a mature automatic lipsync technology that is used by game and film studios, but our Unity support was weak. We retooled it so that character setup is pretty straightforward. We ended up with something that works great for lipsync and facial animation, but it also turned into an extensible bone-based posing system. It seems pretty cool, even outside of driving it with our lipsync data.
Demo Video
If you are interested, here is a video of it in use. Disregard the speech-challenged narrator (or not!)
http://www.youtube.com/watch?v=15-TWaWnHh4
[You may need to go full screen]
If you are interested in the Unity package used in this demo:
http://www.annosoft.com/unity/unity_annosoft_demo.unitypackage
More Details
A “mouth rigger” script is used to set up the visemes. Each viseme is posed in Unity, either by manually moving the bones around or by sample capture from the animations in your FBX (or Maya) file. The animation capture is pretty slick. The visemes (or general poses) are constructed in Maya, Blender, or wherever, as a linear sequence. For lipsync, each frame represents a mouth position.
We provide a script, “AnimationSampler”, that lets you cycle through the animations in Unity. I used the animation sampler and mouth rigger to set up the visemes in the demo. It took about 5 minutes.
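To make the frame-per-pose idea above concrete, here is a minimal sketch of what an AnimationSampler-style helper could look like in Unity. This is illustrative only, not Annosoft's actual script; the class and method names are assumptions, but the Unity APIs (AnimationClip.SampleAnimation, clip.frameRate, clip.length) are standard:

```csharp
using UnityEngine;

// Hypothetical sketch: step through a clip one frame at a time so that each
// frame's mouth pose can be inspected and captured as a viseme.
public class FrameStepper : MonoBehaviour
{
    public AnimationClip clip;   // the linear sequence of viseme poses
    int frame;

    // Advance to the next frame of the clip and pose the character there.
    public void NextFrame()
    {
        int totalFrames = Mathf.Max(1, Mathf.CeilToInt(clip.length * clip.frameRate));
        frame = (frame + 1) % totalFrames;
        // Evaluate the clip at this frame's time and apply it to the character.
        clip.SampleAnimation(gameObject, frame / clip.frameRate);
    }
}
```

Wiring NextFrame to a button or hotkey would let you flip through the sequence while posing each viseme, which matches the workflow described above.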
The only downside to the whole thing is that the mouth rigger must be initialized with the list of bones that will be manipulated, and this is a manual process. However, I think I can automate this by analyzing the animation's bones and determining which ones change. (To do!)
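One way that automation could look (this is a hypothetical sketch, not Annosoft's implementation) is to inspect the clip's curve bindings in the editor and collect every transform the clip actually animates. AnimationUtility.GetCurveBindings is a real, editor-only Unity API:

```csharp
using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

// Hypothetical sketch: discover which bones an AnimationClip manipulates
// by scanning its curve bindings, so the bone list need not be hand-built.
public static class AnimatedBoneFinder
{
    // Returns the transform paths (relative to the animated root) that the
    // clip drives with Transform curves (localPosition/localRotation/localScale).
    public static List<string> FindAnimatedBonePaths(AnimationClip clip)
    {
        var paths = new HashSet<string>();
        foreach (EditorCurveBinding binding in AnimationUtility.GetCurveBindings(clip))
        {
            if (binding.type == typeof(Transform))
                paths.Add(binding.path);
        }
        return new List<string>(paths);
    }
}
```

The resulting paths could then seed the mouth rigger's bone list automatically instead of entering it by hand.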
We also include a generic “BonePoser”, which can be used to record poses by name and then show them at playback (by name, too). This seems pretty useful. We are currently using it for automatic eye blinking and brow motion (driven by data from the lipsync tool).
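For readers curious what record-by-name / show-by-name might mean in practice, here is a minimal sketch in the spirit of the BonePoser described above. All names and the API shape are my assumptions, not Annosoft's actual component:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of a pose-by-name component: capture the local
// transforms of a set of bones under a name, then reapply them on demand.
public class SimpleBonePoser : MonoBehaviour
{
    [System.Serializable]
    class BoneSnapshot
    {
        public Transform bone;
        public Vector3 localPosition;
        public Quaternion localRotation;
    }

    // Pose name -> captured bone transforms.
    readonly Dictionary<string, List<BoneSnapshot>> poses =
        new Dictionary<string, List<BoneSnapshot>>();

    public Transform[] bones;  // assigned in the inspector

    // Record the current local transform of every tracked bone under a name.
    public void RecordPose(string poseName)
    {
        var snapshots = new List<BoneSnapshot>();
        foreach (Transform bone in bones)
        {
            snapshots.Add(new BoneSnapshot {
                bone = bone,
                localPosition = bone.localPosition,
                localRotation = bone.localRotation
            });
        }
        poses[poseName] = snapshots;
    }

    // Reapply a previously recorded pose by name (e.g. "blink", "browUp").
    public void ShowPose(string poseName)
    {
        if (!poses.TryGetValue(poseName, out var snapshots)) return;
        foreach (var s in snapshots)
        {
            s.bone.localPosition = s.localPosition;
            s.bone.localRotation = s.localRotation;
        }
    }
}
```

A real version would presumably also blend between poses over time rather than snapping, which is what makes this kind of system handy for blinks and brow motion.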
The lipsync data is sourced either from The Lipsync Tool, from our production lipsync SDKs, or from the Unity realtime lipsync plugin. The scripts and the Unity side are cross-platform and will run everywhere Unity runs, but unfortunately, the Lipsync Tool only runs on Win32. I’m going to put together a skinny UI for the Macintosh (very soon) and introduce Lipsync Tool 5.0 for Macintosh.
If you want to test out the Lipsync Tool, please send me an e-mail. We offer a 30-day evaluation at no charge. The Lipsync Tool download from annosoft.com is save-disabled, so you need to request the time-limited trial. I can send more material, such as recommended viseme poses, to help you get a high-quality result.
If you’ve got a project that needs lipsync, I do think this is worth a look. Our speech technology has been used in hundreds of games, and I think the Unity integration is looking very good too. It’s probably overkill if you just have a couple of lines of audio, but for anything more than that, a truly automatic solution is going to save money.
Thank you for listening!
Cheers,
Mark Zartler
Annosoft
mzartler@annosoft.com