I’m using WebRTC 2.4.0-exp.5 in my project, and I’m trying to take the audio received over WebRTC and use it as the input for OVRLipSync.
My code is similar to the sample in the reference here Audio streaming | WebRTC | 2.4.0-exp.11, and I can receive and listen to the audio, but OVRLipSync doesn’t seem to be able to use it when I set that same AudioSource as its input.
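For context, my receive side roughly follows that sample. This is only a sketch; the component name, the `outputAudioSource` field, and the peer connection setup are my own placeholders, not the full code:

```csharp
using Unity.WebRTC;
using UnityEngine;

// Sketch of the receive side, loosely following the Audio streaming sample.
// The surrounding signaling/peer-connection setup is omitted.
public class AudioReceiver : MonoBehaviour
{
    [SerializeField] private AudioSource outputAudioSource;
    private RTCPeerConnection peer;

    private void Start()
    {
        peer = new RTCPeerConnection();
        peer.OnTrack = e =>
        {
            if (e.Track is AudioStreamTrack audioTrack)
            {
                // Route the decoded WebRTC audio into the AudioSource.
                // Note: no AudioClip gets assigned; the track feeds the source directly.
                outputAudioSource.SetTrack(audioTrack);
                outputAudioSource.loop = true;
                outputAudioSource.Play();
            }
        };
    }
}
```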
I wanted to try using the PCM/float data from WebRTC’s AudioStreamTrack directly instead, but it doesn’t seem to be possible. Could anyone tell me if there is a way to do this currently?
I also tried using Unity’s OnAudioFilterRead, but it doesn’t seem to get called.
I changed the order of the AudioSource and the script containing OnAudioFilterRead on the GameObject, but that didn’t change the result. If you have any idea what I might be doing wrong, it would be really helpful.
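This is roughly what I was attempting: a script on the same GameObject as the AudioSource, expecting Unity to call OnAudioFilterRead with the PCM data. The class name and log line are just illustrative:

```csharp
using UnityEngine;

// Attached to the same GameObject as the receiving AudioSource.
public class PcmTap : MonoBehaviour
{
    // Unity calls this on the audio thread for each DSP block,
    // if the filter chain is running for this AudioSource.
    private void OnAudioFilterRead(float[] data, int channels)
    {
        // data is interleaved PCM in [-1, 1].
        // In my case this was never invoked at all.
        Debug.Log($"Got {data.Length} samples, {channels} channels");
    }
}
```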
Thank you
Edit: just in case, I wanted to add that even though I am receiving and listening to the audio, the AudioSource has no AudioClip set. I think that’s probably just how the WebRTC library works, though.
I think I managed to fix my problem. I found that when I added the WebRTC plugin’s AudioCustomFilter script component to my GameObject, my other script’s OnAudioFilterRead started working. I don’t know why this is needed; it doesn’t seem to be mentioned in the documentation, and I only found it by luck after digging into AudioStreamTrack’s code and the scripts included in the plugin.
I tried placing OnAudioFilterRead inside VRAvatarWebrtcSynchronizer or LipSyncHandler, but the data array was always all zeros, even though the audio was being received from WebRTC.
After adding AudioCustomFilter like in the screenshot, the OnAudioFilterRead data was correct and I am able to use it in OVRLipSync.
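For anyone hitting the same issue, the working setup looks roughly like this. It’s only a sketch: the class name is mine, and the `ProcessAudioSamples` call is my assumption about the OVRLipSync API based on what OVRLipSyncContext does internally; check your OVRLipSync version for the exact method:

```csharp
using UnityEngine;

// On the same GameObject: AudioSource (playing the WebRTC track),
// the WebRTC plugin's AudioCustomFilter, an OVRLipSyncContext, and this script.
// With AudioCustomFilter present, OnAudioFilterRead receives real samples.
public class LipSyncFeeder : MonoBehaviour
{
    private OVRLipSyncContext lipSyncContext;

    private void Awake()
    {
        lipSyncContext = GetComponent<OVRLipSyncContext>();
    }

    private void OnAudioFilterRead(float[] data, int channels)
    {
        // data now contains the received WebRTC audio (interleaved PCM).
        // Forward it to OVRLipSync for viseme processing (method name assumed).
        lipSyncContext?.ProcessAudioSamples(data, channels);
    }
}
```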