Why is Klattersynth TTS different from many other speech-related assets for Unity?
No need for the underlying platform (OS or browser) to offer speech features.
No need for a network connection for external generation of audio clips.
No need to pre-generate the samples before creating a build of your app or game. The clips are either streamed in real time or generated on the fly while the app or game is running.
Is this considered done or are you still working on the phonemes? The F's sound more like static and the Th's are kind of just a pop. Also, in the WebGL demo, the base frequency doesn't seem to affect whisper very much. Are there more audio tweaks available?
Hi @Obsurveyor, I won't be actively working on the sounds of the phonemes. It's only a distant possibility that I'd add one or two more later, or try to adjust them. But with this technique there aren't going to be huge improvements in that area; a synth this small is bound to have some limitations.
The example voices in the "Text Entry" demo are made by adjusting the three available parameters: "Ms Per Speech Frame" (which effectively controls the speed), "Flutter" and "Flutter Speed" (which can add a bit of unsteady weirdness to the sound, although normally the flutter is just a somewhat inaudible variance in the voice wave).
Here's an image from the inspector:
(this is the "Slow and unsteady" voice of the text entry demo)
Very interesting, a couple of questions though. Since it's being generated in real time, is it possible to adjust the actual speed/pitch in real time as well? (e.g. in the WebGL demo, being able to adjust "Base Voice Frequency" and having it change in real time instead of having to prerender it, though I understand WebGL HAS to have it prerendered). If so, this would be PERFECT for my needs! And as for my second question - I completely forgot what it was! haha.
Hi @DbDib, you're correct - WebGL has to have audio prerendered, so in WebGL builds Klattersynth needs to generate the whole clip just before playing it. It doesn't take long, but it is pre-generated before the clip actually starts playing.
However, it is of course possible to adjust the pitch parameter of the AudioSource playing the generated clip, just as with any AudioClip. Note that lowering the pitch both deepens the voice and slows the playback down at the same time (and vice versa when raising it).
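For example, here's a minimal sketch of adjusting the playback pitch at runtime. It only uses the standard Unity AudioSource API, nothing Klattersynth-specific, and the component and field names are just illustrative:

```csharp
using UnityEngine;

// Minimal sketch: adjust the pitch of the AudioSource that plays the generated
// speech clip. Lowering the pitch also slows the playback down (and vice versa).
public class SpeechPitchTweak : MonoBehaviour
{
    public AudioSource speechSource;              // the AudioSource playing the generated clip
    [Range(0.5f, 2f)] public float playbackPitch = 1f;

    void Update()
    {
        // Takes effect immediately, even while the clip is playing.
        speechSource.pitch = playbackPitch;
    }
}
```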
When used in streaming mode, the synth latches onto the parameters given at the time it starts speaking that particular line (msPerSpeechFrame is also locked in at initialization time, to minimize extra memory allocations later). Even real-time streamed audio is generated in batches, so fine-grained parameter control would need to be specified in advance (unless the batch size is very small). That's not a feature of the API right now, but it's a possibility for a future version.
However, the currently supported way is to simply instruct the synth to speak e.g. just a single word at a time, and adjust the base frequency for each word once the previous one has finished. This would work both with streamed and pre-generated (and possibly cached) speech clips; see the sketch below.
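A rough sketch of that word-by-word approach, under the assumption of hypothetical member names (Speak, isSpeaking, baseVoiceFrequency); these are not the verified Klattersynth API, so check the asset's documentation for the real calls:

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch only: speak one word at a time and change the base
// frequency before each word. The stub below stands in for the real speech
// component so the example compiles on its own.
public class WordByWordSpeaker : MonoBehaviour
{
    public SpeechStub speech;   // stand-in for the actual Klattersynth speech component

    public IEnumerator SpeakWordByWord(string sentence, int startFrequency, int stepPerWord)
    {
        int frequency = startFrequency;
        foreach (string word in sentence.Split(' '))
        {
            speech.baseVoiceFrequency = frequency;                  // new base pitch for this word
            speech.Speak(word);                                     // speak just this one word
            yield return new WaitUntil(() => !speech.isSpeaking);   // wait until it has finished
            frequency += stepPerWord;                               // vary the pitch per word
        }
    }
}

// Assumed member names for illustration; the real component differs.
public class SpeechStub : MonoBehaviour
{
    public int baseVoiceFrequency;
    public bool isSpeaking;
    public void Speak(string text) { /* the real synth call would go here */ }
}
```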
Long answer: the text-to-speech only has an approximate mapping for the English language and no other languages. There's support for entering phonemes directly (the documentation has a list of those). It may be possible to compose some Chinese words using the phonemes directly (which would take time and experimentation), but even then there's no way to express the tones of Chinese pronunciation.
Could you verify that the speechSynth instance which you're using is not playing some other speech clip right at the time when you're asking it to pregenerate stuff?
Also, in case the speech synth is flagged to use streaming mode, the AudioSource component used by the Speech also isn't allowed to be playing anything when the synth is asked to pregenerate a clip.
Does the included Pangrams example work for you? It pre-generates its clips in a batch, so you can use it as a reference. Please check KlattersynthTTS_Example_Pangrams_Controller.cs and the IEnumerator pangramsDemo() method. The if (!clipsGenerated) { ... } code block contains the batch generation.
(Note 1: it's a coroutine, but only to update the progress info while the clips are being generated - it would work just as well without being inside a coroutine. Note 2: there are 3 different speech synths used in the batch generation, but it works just as well if the code is modified to use only a single one.)
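For reference, a rough sketch of what such batch pregeneration can look like inside a coroutine. The Pregenerate call and the stub component are assumptions for illustration, not the actual code of the Pangrams example:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch of pregenerating clips in a batch, loosely mirroring the
// described Pangrams example. The coroutine is only needed to update progress
// info while the clips are being generated.
public class BatchPregenSketch : MonoBehaviour
{
    public PregenSpeechStub speech;   // stand-in for the actual speech synth component
    public Text progressLabel;        // optional UI label for progress info
    public string[] lines;            // the texts to pregenerate

    bool clipsGenerated;

    public IEnumerator PregenerateAll()
    {
        if (!clipsGenerated)
        {
            for (int i = 0; i < lines.Length; ++i)
            {
                // The synth (and its AudioSource) must not be playing anything here.
                speech.Pregenerate(lines[i]);
                if (progressLabel != null)
                    progressLabel.text = $"Generating {i + 1}/{lines.Length}";
                yield return null;   // let the UI refresh between clips
            }
            clipsGenerated = true;
        }
    }
}

// Assumed method name for illustration; the real component differs.
public class PregenSpeechStub : MonoBehaviour
{
    public void Pregenerate(string text) { /* the real pregeneration call would go here */ }
}
```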
just imported into a new project (2019.1.8), opened the KlattersynthTTS_Example_TextEntry scene,
entered text, e.g.:
"is there anybody in there"
the synth completely ignores the last two words (with the added bonus of speaking something at the beginning if any other text was previously entered (I think))
can the package demo scene be configured to get reasonable results, at least comparable to the webgl version?
if yes, why are those settings not the same as in the webgl demo?
Even single words such as "Help" are not spoken identically (with another added bonus that it's sometimes apparently necessary to press the enter key twice in the textbox to start the speech)
Note: the displayed settings are exactly the same, i.e. I just ran the demo scene without any changes after importing
Another rather unpleasant surprise - is there any reason why it's distributed as an assembly only?
I would very much rather have access to the code, especially for cases like the above - if they're not fixable by exposed user settings.
Hi @r618, that sounds like an unexpected regression. There definitely shouldn't be any notable difference between the WebGL version and the package. I'll investigate this.
About being DLL only: I don't have current plans to release it in source code form.
The playback code for speech has two modes: streamed and non-streamed. Streamed means it pumps the speech synth frequently for data (on the fly), while the non-streamed mode composes the whole speech as a clip before playing it. In practice there's no big difference - even the non-streamed mode is quick to compose the needed clip and play it. WebGL is forced to use the non-streamed mode, which is why that particular regression is not likely to show up in the web demo. When I released Klattersynth I tested pretty extensively on a wide variety of versions, so I guess this is probably an issue happening only with later version(s) of Unity.
So, you can use the workaround for now. I'll debug this once I have the chance. Looks like I finally must make a new release after the initial 1.0.0, as this is the first reported issue which clearly must be fixed...
@r618, yes (if the streamed mode worked, that is... sorry for this issue). Although my guess is that you will likely have some reason to split the speech into smaller parts anyway and play back each part when it's convenient.
EDIT! NOTE: The comment below is only partially correct. After some further tests, I think there is no regression in Unity 2018.3. Instead it has a slightly different underlying implementation, which actually fixes some of the weirdness that used to require hacks with earlier versions. It's still slightly unexpected in some minor details. I modified the text below to be tiny text, since it's partly misleading.
It seems that starting from Unity 2018.3 there's a change with audio clips using reader callbacks (I'd say it's a regression, and I have an isolated test which shows the issue between 2018.2 and 2018.3). Sorry for not noticing this when testing for compatibility with the latest Unity versions.
When reusing the same streamed clip (stopping it in between), it no longer asks for new data at Play(); instead it first plays some old data it already had in the buffer. Because of this the playback starts with a lag (if the old data was just silence), or even with wrong audio if Unity internally already had something in the existing sample buffer.
Additionally, because of the above issue, my code which monitors when the sound can be stopped is not in sync with what you hear, since Unity only asks for the actual new data (the start of the new speech) after a considerable pause (when playing the old buffer has finished).
Initial tests seem to indicate that the only way to cope with this for now is to create a new AudioClip every time, even though it used to work fine to keep reusing the same clip. This will create more memory pressure, with new clips being created and deleted.
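For context, this is roughly the mechanism involved: a streamed AudioClip created with a PCM reader callback, and, as the workaround, a fresh clip created per utterance instead of reusing one clip across Stop()/Play(). A minimal sketch using plain Unity API; the sine fill is only a placeholder for where the synth would write its samples:

```csharp
using UnityEngine;

// Minimal sketch of a streamed AudioClip with a PCM reader callback.
public class StreamedClipSketch : MonoBehaviour
{
    public AudioSource source;
    const int sampleRate = 11025;
    float phase;

    public void PlayNewUtterance()
    {
        // Creating a new streamed clip each time avoids playing stale buffer
        // contents on Unity 2018.3+ (the issue described above), at the cost
        // of extra clip allocations.
        AudioClip clip = AudioClip.Create("speech", sampleRate * 10, 1, sampleRate,
                                          true, OnAudioRead);
        source.clip = clip;
        source.Play();
    }

    // Unity calls this whenever it needs more samples for the streamed clip.
    void OnAudioRead(float[] data)
    {
        for (int i = 0; i < data.Length; i++)
        {
            data[i] = Mathf.Sin(phase) * 0.25f;          // placeholder audio (220 Hz tone)
            phase += 2f * Mathf.PI * 220f / sampleRate;
        }
    }
}
```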
I'll see about filing a bug report, and work on a new version with an internal fix to get the streamed mode working again.
Until then, the workaround is either to use Unity 2018.2.x or older, or to disable "Use Streaming Mode" when using Unity 2018.3+.
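If you want the toggle to follow the Unity version automatically, something like the sketch below could work. Note that useStreamingMode is only an assumed field name matching the "Use Streaming Mode" inspector toggle; the actual member in Klattersynth may be named differently:

```csharp
using UnityEngine;

// Hedged sketch of the workaround: fall back to non-streamed (pre-generated)
// clips on the Unity versions where the streaming issue appears.
public class StreamingModeWorkaround : MonoBehaviour
{
    public StreamingSpeechStub speech;   // stand-in for the actual speech component

    void Awake()
    {
#if UNITY_2018_3_OR_NEWER
        speech.useStreamingMode = false;   // assumed field name, see note above
#endif
    }
}

// Stub so the sketch compiles on its own.
public class StreamingSpeechStub : MonoBehaviour
{
    public bool useStreamingMode = true;
}
```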
I edited my previous post, as it turns out I was able to make a properly looping audio source & custom-filled clip, reusing the same one over stop/play, without the unnecessary lags I first thought there would be.
I have a proof of concept which works both on older Unity versions (with the quirks previously needed) and just as well on newer Unity versions.
It'll take maybe a day or two until I get it integrated into Klattersynth and a new version released.
It has fixes for streaming mode when used with Unity 2018.3+. Thanks to @r618 for the heads-up about the issue.
Also compatibility with Unity 2019 is verified.