[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools


Rogo Digital’s LipSync Pro - a phoneme-based lipsyncing & facial animation system
UPDATE 10/01/2021:
LipSync Pro is no longer available for purchase. For details on why, see this post later in the thread (page 24, post #6650110). Support for existing customers is still available, and LipSync Pro will be back as a free product some time later in the year.

LipSync Pro is a high-quality, easy-to-use system for creating phoneme-based lipsync and facial animation within Unity. It allows you to easily set up complex facial poses for each phoneme and emotion, consisting of multiple blendshapes, bones or more, and synchronise phoneme timings to audio, all inside the Unity editor. Animations can then be played back on any character with a LipSync component attached with no additional work!
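
Once a character is set up, playback at runtime is just a couple of lines. Here's a rough sketch of the idea (see the documentation for the exact API):

```csharp
using UnityEngine;
using RogoDigital.Lipsync;

// Sketch: play a pre-synced LipSyncData clip on a character.
public class PlayDialogue : MonoBehaviour
{
    public LipSync lipSync;   // the LipSync component on your character
    public LipSyncData clip;  // a clip synced in the Clip Editor

    void Start()
    {
        // Plays the audio and animates phonemes/emotions in sync.
        lipSync.Play(clip);
    }
}
```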

LipSync Pro can also process audio automatically and show a real-time preview in the editor, making it quicker and easier than before to synchronise any audio.

Features

  • AutoSync - Automatic phoneme detection in-editor, saving you time when setting up your dialogue clips.
  • Preset System - Save and load preset pose setups for your characters, or use a built-in one.
  • Easy-to-use editors - Straightforward tools for setting up poses and syncing audio.
  • Emotions - Set up emotion poses on characters, and blend into and out of them alongside phonemes for complete facial animation.
  • Gestures - Cue full-body Mecanim animations to be triggered as part of your LipSync animations.
  • BlendSystems - Allow for custom support for other character systems (see the sketch after this list).
  • Bone-based animation - Add bone transforms to phoneme and emotion poses alongside or instead of blendshapes, allowing LipSync Pro to be used on a wider range of character models.
  • Emotion Mixer - Easily create more nuanced expressions by blending multiple emotions together.
  • Real-time preview - See the animation in the editor while synchronising audio clips.
  • Pose Guides - Illustrations of how each phoneme pose should look in the component editor.
  • Marker Filtering - Show/hide certain phoneme markers in the editor to make it easier to move or edit the ones you want.
  • Blendshape mesh creation - Combine two or more separate meshes into a single mesh with blendshapes, inside LipSync Pro.
  • The fastest workflow for Adobe Fuse characters - Built-in presets and AutoSync let you get a character talking in less than a minute.
  • AutoSync batch processing - Easily run AutoSync on any number of audio clips in batch mode to speed up large projects.
  • Fully customisable phoneme set - Use any number of phonemes with custom names in place of the default Preston Blair set.
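
To give a rough idea of how a custom BlendSystem looks, here's a sketch. The override shown follows the pattern the built-in blendshape system uses, and FaceRig is a hypothetical stand-in for whatever third-party character system you're targeting:

```csharp
using UnityEngine;
using RogoDigital.Lipsync;

// Hypothetical stand-in for a third-party facial rig component -
// not a real type, just here to make the sketch self-contained.
public abstract class FaceRig : MonoBehaviour
{
    public abstract void SetChannelWeight(int channel, float weight01);
}

// Sketch of a custom BlendSystem driving that rig.
public class FaceRigBlendSystem : BlendSystem
{
    public FaceRig rig;

    // LipSync calls this whenever a blendable's weight changes;
    // "blendable" is the shape index, "value" runs 0-100.
    public override void SetBlendableValue(int blendable, float value)
    {
        rig.SetChannelWeight(blendable, value / 100f);
    }
}
```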

Currently features built-in or downloadable integration with the following 3rd party assets:
- Adventure Creator [Native]
- Cinema Director [Downloadable]
- Cinematic Sequencer - SLATE [Downloadable]
- Dialogue System for Unity [Native]
- Flux [Downloadable]
- GRML Base Models [Native]
- iClone Characters [Native]
- Morph3D [Downloadable]
- Mixamo (now Adobe) Fuse [Native]
- NodeCanvas [Downloadable]
- Playmaker [Downloadable]
- PolyMorpher [Downloadable]
- Quest System Pro [Native]
- RT-Voice (and Pro) [Native*]
- UMA 2 [Downloadable]
- uSequencer [Downloadable]

Asset Store
Documentation
Video Tutorial Series
Web Player Demo
WebGL Demo

If you have any suggestions/comments/questions, I’d love to hear them!

Cheers,
Rhys.

* RT-Voice exports to .wav natively.

This looks pretty nice - do you have any videos or a web demo we can take a look at?

Yes, sorry - I just forgot to include them in the first post! I’ve added the links now. :slight_smile:

Looks interesting -

How about compatibility with Adventure Creator?

I don’t use Adventure Creator myself, but I just took a quick look at their documentation - it seems they have a number of lipsyncing options built in, but they’re hardcoded, so this alpha version isn’t supported by Adventure Creator.
I will look into exporting to other formats in the next version, though, which should provide compatibility with assets like Adventure Creator :slight_smile:

I also plan on adding PlayMaker support in the next version.


Looks really interesting. I’m pretty sure that with all the features you plan on implementing, this will be the definitive solution for lipsync in Unity.
Good luck, I’ll certainly be watching this one :slight_smile:

Thanks Milkeedee!

I was hoping the asset store page would be up by now, but unfortunately it’s not. I’m already working on the next update, though, which will include some improvements to the editors and support for emotion markers.

Hopefully I’ll have it available soon!

Ahhhh, competition. I had been wondering when someone else would get around to providing a lip-sync solution. What you’ve got so far is decent. Keep working on the improvements, though. The next upgrade for Cheshire is on its way.

I was thinking of buying, but I’m not sure if it will work with characters created in Daz 4.7, and I also got a message stating some features may not be included in Unity 5. When will this be compatible with Unity 5?

LipSync is now available in the asset store! I’ll hopefully be putting out v0.2 in a week or so too, so stay tuned for more updates.

Thanks! I looked at Cheshire a while ago and it looked very good - and competition’s almost always a good thing! :stuck_out_tongue:

If the character models export with blendshapes included then they’ll work with LipSync - import one into your project first and see if there’s a “BlendShapes” section on any of the skinned mesh renderer components. If there are blendshapes for facial shapes/poses in there, it’ll work fine. You may need to change some export settings in Daz, though.
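
If you’d rather check from a script than in the inspector, something like this will log every blendshape on a character (standard Unity API, nothing LipSync-specific):

```csharp
using UnityEngine;

// Logs every blendshape found under this object, so you can
// confirm the Daz export actually included them.
public class ListBlendShapes : MonoBehaviour
{
    void Start()
    {
        foreach (SkinnedMeshRenderer smr in GetComponentsInChildren<SkinnedMeshRenderer>())
        {
            Mesh mesh = smr.sharedMesh;
            for (int i = 0; i < mesh.blendShapeCount; i++)
            {
                Debug.Log(smr.name + ": " + mesh.GetBlendShapeName(i));
            }
        }
    }
}
```

Attach it to the root of the imported character and hit Play - if nothing facial shows up in the console, check the Daz export settings for morph/blendshape options.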

As for Unity 5 - the next update will support it properly, though as far as I know the current one should work with Unity 5’s automatic script updater. I’ll test it out and get back to you.

Bought it! Thanks, now I don’t need MotionBuilder :wink:

Excellent stuff. Glad it works with Fuse.

Valve’s original Half-Life game used a system where a character’s mouth opened based on the volume of the audio: the louder the sound, the wider the mouth opened. It was inaccurate, but saved a lot of work. I don’t know if it might be an alternative option for this system.
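
Roughly, something like this (a sketch driving a blendshape rather than a jaw bone - the gain and smoothing values are guesses you’d tune per model):

```csharp
using UnityEngine;

// Half-Life 1 style "flapping jaw": estimate the loudness (RMS)
// of the playing audio and open a jaw blendshape proportionally.
public class VolumeJaw : MonoBehaviour
{
    public AudioSource source;
    public SkinnedMeshRenderer face;
    public int jawShapeIndex = 0;  // your model's open-mouth shape
    public float gain = 400f;      // maps RMS (roughly 0-0.25) to 0-100
    public float smoothing = 12f;

    private float[] samples = new float[256];
    private float weight;

    void Update()
    {
        // Sample the audio currently being played back.
        source.GetOutputData(samples, 0);

        // Root-mean-square of the samples as a loudness estimate.
        float sum = 0f;
        for (int i = 0; i < samples.Length; i++)
            sum += samples[i] * samples[i];
        float rms = Mathf.Sqrt(sum / samples.Length);

        // Smooth the weight so the jaw doesn't jitter every frame.
        float target = Mathf.Clamp(rms * gain, 0f, 100f);
        weight = Mathf.Lerp(weight, target, Time.deltaTime * smoothing);
        face.SetBlendShapeWeight(jawShapeIndex, weight);
    }
}
```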

Thanks Ukvedys - glad you like it.

Thanks! That was actually the previous system I was using - I created this to replace it xD It looks OK on the kind of old-school low-poly models Half-Life 1 used, but often looks pretty strange on more modern characters.

I’m aiming more for high quality lip syncing with this, though you’re right about it being quite a lot of work, especially if you have a lot of dialogue. The automatic syncing I’m adding in will hopefully help with that though. :slight_smile:

I believe there is actually another plugin currently in the Asset Store that does something similar to what you are describing. (plays animations based on peaks and valleys of an audio file, mainly opening and closing a mouth) You could go that route if you wanted to.

Personally, I’m leaning more toward the combined blend-shape approach. The system you’re thinking of would be faster and considerably more automatic, but it wouldn’t produce nearly as satisfying animations. Who knows, maybe the flapping-jaw approach is exactly what you need for your game. Maybe you’re specifically trying to emulate that style. (more like puppetry than full-on lip sync)

The advantage of a blend-shape focused approach to the problem is a much more nuanced performance. Half-Life 1 went with the flapping jaw approach. But Half-Life 2 used a system very similar to what Rtyper is doing with his LipSync script. (blend shapes combined and animated together for different results) And Half-Life 2’s lip-syncing solution is still considered to be one of the best in the industry.

I’ll probably get this tool, but it would be awesome to have speech to text built in to the editor.

Thanks for the interest. Speech to text is something I’ve looked at quite a lot with regard to this - I agree it’d be a very useful addition to the tool, but there’s a surprising lack of offline, cross-platform APIs for it. I do have some ideas about it, though I can’t promise 100% that it will be included.


Here’s a quick preview of how the emotion markers work in the new alpha. Still working on other features for it, but I expect it should be finished by the end of the week.

That’s really handy. From the short time that I’ve used LipSync, I really like it.

It is a bit of a challenge to create all of the phoneme markers, but that’s not unusual in any lip syncing software. I’d like to be able to scrub through the track without triggering multiple unstoppable plays, but that’s not a show stopper.

Also, if you try to move a phoneme marker after you’ve stopped the clip, the change doesn’t show until the clip is played again. The same happens with freshly created markers while the clip is stopped. Again, it’s not a show stopper, and can easily be worked around by quickly double-tapping the play button to activate the clip.

All that said, I’m very happy with the current state and future direction of this tool. Thanks for making it!

Feedback like this is very much appreciated. When developing software, it isn’t always possible for the designer to catch every little quirk that crops up. Nothing is better for testing than having the software in the hands of the end-user. Somehow, end-users always seem to find every permutation and use-case that a piece of software can possibly be put through.

Thanks for all the feedback IFL! As Richard said, users are almost always better at testing software than the developer is - I suppose we probably subconsciously avoid things that might not work when developing :stuck_out_tongue:

Yeah, this is probably the biggest thing I want to change about it - it can be very time consuming right now.

This is interesting - are you using Unity 5? I’ve found that in Unity 5, scrubbing will start the entire clip playing, and I still haven’t found a solution for it. In 4 it correctly plays only a small portion of the clip.

Yes, I’d come across this - it’s fixed in alpha 0.2.

Thank you - feedback like this really is incredibly useful!