SALSA Lipsync Suite - lip-sync, emote, head, eye, and eyelid control system.

Hello,
Please send us an email with your invoice number and the versions of Unity, SALSA, your OS, etc.

Also include a video of the avatar exhibiting the animations/behavior you are describing as well as screenshots of the configurations and model hierarchy in your scene. We will take a look at it and see what sticks out as the problem.

Thanks,
D.

Hey Darrin!
I need some help from you and your team.
I've been able to re-create your setup on the main 7 visemes, matching their positioning as closely as possible.
It's on a purchased model.
But the rest (first) viseme, which has the default face expression (closed mouth), misbehaves: when I press Play to test, the mouth opens about halfway, reading 50 on the component's blendshape min/max Max value. It's like it has a hidden default value somewhere. The only way to close the mouth in realtime is to set the Max value to -50.
What could be the problem?
In the scene, with Play mode off and preview on, the mouth is closed in the preview display, as it should be.
The blendshape values on the model in its Inspector are at 0, and the trigger tick mark is on the Rest viseme in Play mode (realtime), but the mouth is still halfway open.
/Stefan

Hi, sounds like you have a problem that is similar to some of the other model systems. Check out this doc page to see if it helps. Mecanim & Other External Influences - SALSA LipSync Suite v2

Also, it sounds like you might have created a ‘rest’ viseme. You actually don’t want to do that. The ‘rest’ viseme is achieved by all other visemes being OFF, which SALSA will manage. So, if you have 8 total visemes, remove the ‘rest’ viseme.

Hope that helps,
D.

Hey!
I'll try, and thanks for the link. I'm on it as we speak.
Thanks!

It worked. Removing the jaw from the rig on the model did the trick.
Now it's just a Mixamo problem with the body, but that's another issue.
Thanks btw!

Hello, I have had your asset for years! While I had been away from Unity, I have been back since Nov 2023. (My page says March 5th, 2021.) Please excuse the OCD … I overdo the explaining a lot!

Anyway, I am not sure if this is the behavior you want, but on your landing page (youtube | website | unity forum | documentation | email), the link for the Unity forum is a dead end: the page does not exist, so the link is wrong. And if you do not want the forum link, then maybe take it out?

It is great to see you are still keeping up with updates to this (I will make time for a review sometime soonish). I have been slowly doing reviews for all the devs who have been here since Unity 5 and are still supporting their assets. Huge respect, and thank you!

P.S. I dimly remember, when it first came out, I had a bit of a problem with CC3 and you were great and fixed it. Thanks so much, Mark

Thanks for the heads up. Definitely not intended behavior.

D.

Hello,

I’m working on a Unity project that requires real-time lip-sync in a WebGL build. Instead of using Unity’s AudioSource, we are playing a live voice stream via the Web Audio API.

Project Overview:

  • Started: 2014, recently resumed
  • Previously Used: SALSA 1.2.0 & RandomEyes 1.2.0
  • Target Platform: WebGL
  • Audio Playback: Now using Web Audio API instead of AudioSource

Question:

Would Amplitude for WebGL v1.2.4 and SALSA LipSync Suite v2.5.6 work for real-time lip-sync when playing audio via the Web Audio API, without AudioSource?

Thanks in advance for your help!

Hi @YanaVV, the short answer is no. The long answer is probably, but you would need to feed the data or analysis back to SALSA. SALSA can be configured to use external analysis in a couple of different ways: poking the data into SALSA's analysis value directly, or providing the data for SALSA to analyze itself. You can check out the delegate documentation for SALSA here: Delegate Processing - SALSA LipSync Suite v2

You can also see documentation detailing an implementation of a filter chain in this documentation: Custom Filter Chain - SALSA LipSync Suite v2

Of course bridging the gap between WebGL’s WebAudio API and Unity code will be necessary to make this work.
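As a rough illustration of the "poke data into SALSA" approach, a bridge script on the Unity side might look like the sketch below. This is an assumption-laden sketch, not confirmed code: the `audioAnalyzer` delegate and its signature are based on the Delegate Processing documentation referenced above, and the `SetAmplitude` entry point is a hypothetical hook you would call from a `.jslib` plugin that taps the Web Audio API.

```csharp
using UnityEngine;
using CrazyMinnow.SALSA; // SALSA runtime namespace

public class WebAudioBridge : MonoBehaviour
{
    public Salsa salsa;          // SALSA component on the avatar
    private float latestAmplitude; // value pushed in from the browser side

    // Hypothetical entry point, called from a .jslib plugin
    // (e.g. via SendMessage) with the current Web Audio amplitude.
    public void SetAmplitude(float amplitude)
    {
        latestAmplitude = amplitude;
    }

    void Start()
    {
        // Replace SALSA's internal AudioSource analysis with our own value.
        // Delegate name/signature assumed from the Delegate Processing docs.
        salsa.audioAnalyzer = (channels, clip) => latestAmplitude;
    }
}
```

The design point is simply that SALSA never touches an AudioSource here; the browser-side JavaScript does the analysis and Unity only relays a single float per frame.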

D.

I keep getting this error –
Cannot get data on compressed samples for audio clip "". Changing the load type to DecompressOnLoad on the audio clip will fix this.
– even though I have indeed changed the load type to DecompressOnLoad.

Details:

  • Building for iOS, testing on an iPad Air
  • I am using native Swift code to generate an audio file, passing the filepath back into Unity, and then creating an AudioClip to use with SALSA
  • I’m using UnityWebRequestMultimedia.GetAudioClip to fetch the audio using the filepath
  • I’m grabbing the DownloadHandlerAudioClip and setting compressed to false
  • I have verified that the resulting AudioClip’s loadType is DecompressOnLoad
  • The AudioSource plays the audio as expected, but Salsa lip syncing does not work and I get the above error
  • I’ve tried with both .wav and .aif file types
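For reference, the fetch described in the bullets above can be sketched roughly like this (the file path and audio type are placeholders, and error handling is omitted):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ClipFetcher : MonoBehaviour
{
    public IEnumerator LoadClip(string filePath)
    {
        using (var req = UnityWebRequestMultimedia.GetAudioClip(
                   "file://" + filePath, AudioType.WAV))
        {
            // Setting compressed = false should yield a DecompressOnLoad clip.
            ((DownloadHandlerAudioClip)req.downloadHandler).compressed = false;
            yield return req.SendWebRequest();

            AudioClip clip = DownloadHandlerAudioClip.GetContent(req);
            Debug.Log(clip.loadType); // verify: DecompressOnLoad
        }
    }
}
```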

Any help is appreciated. Thanks!

Hello,

My guess is that you are possibly trying to use the audio clip before its state is "ready." In the Web Audio API, which is the audio framework that Unity uses on WebGL, it is possible to play an audio clip before its state = ready, but it is not possible to analyze data from that clip before the state = ready.
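A minimal sketch of deferring playback (and therefore SALSA's analysis) until the clip reports it is ready, using Unity's `AudioClip.loadState` property, might look like this; treat it as an illustrative assumption for this scenario rather than a confirmed fix:

```csharp
using System.Collections;
using UnityEngine;

public class ClipReadyGate : MonoBehaviour
{
    // Hold playback until Unity reports the clip's sample data is
    // fully loaded, so analysis can actually read the samples.
    public IEnumerator PlayWhenReady(AudioClip clip, AudioSource source)
    {
        while (clip.loadState != AudioDataLoadState.Loaded)
            yield return null; // still decoding/streaming

        source.clip = clip;
        source.Play();
    }
}
```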