Hi everyone,
I’ve got a question regarding audio in Unity. For my bachelor’s thesis I need to implement a way for an avatar to speak with the voice output of KoljaB’s realtimeTTS Python library.
Calling it via pythonnet turned out to be a dead end, as it was basically impossible to debug with my current skills. So I tried it via WebSocket and ran into a wall: AudioClips apparently cannot simply stream audio over a WebSocket; they either need files served via http/https, ready for consumption, or files that are already in the editor, ready to play.
My current plan is to send the text to the Python server, start generating the audio asynchronously, chop it into small single WAV files, send a short message over the WebSocket whenever one is ready to be retrieved, and then fetch and play it via UnityWebRequestMultimedia.
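For context, here’s a rough sketch of the chopping step on the Python side, assuming the TTS engine hands back raw PCM bytes (the function name and parameters are my own illustration, not realtimeTTS’s API): each slice gets wrapped in its own standalone WAV header using only the standard library, so Unity can fetch each chunk as a complete file.

```python
import io
import wave

def chop_pcm_to_wav_chunks(pcm: bytes, sample_rate: int = 16000,
                           sample_width: int = 2, channels: int = 1,
                           chunk_ms: int = 500) -> list[bytes]:
    """Split raw PCM audio into small standalone WAV files (as bytes),
    each roughly chunk_ms long, ready to be served over http."""
    bytes_per_ms = sample_rate * sample_width * channels // 1000
    step = bytes_per_ms * chunk_ms
    chunks = []
    for start in range(0, len(pcm), step):
        buf = io.BytesIO()
        # wave writes a valid RIFF/WAV header around each PCM slice
        with wave.open(buf, "wb") as wf:
            wf.setnchannels(channels)
            wf.setsampwidth(sample_width)
            wf.setframerate(sample_rate)
            wf.writeframes(pcm[start:start + step])
        chunks.append(buf.getvalue())
    return chunks

# one second of silence at 16 kHz, 16-bit mono -> two 500 ms WAV files
wavs = chop_pcm_to_wav_chunks(b"\x00" * 32000)
```

The server would then write each chunk to disk (or keep it in memory behind an http endpoint) and push its URL over the WebSocket.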
I hate everything about this, not gonna lie. xD
But I also can’t imagine that this is the correct way to do it; does anyone have a better approach they could point me to?
Thanks and have a great day!