Klattersynth TTS - Support Thread

:hushed: Klattersynth TTS
Learn more at the official website of the asset: https://strobotnik.com/unity/klattersynth/

Klattersynth TTS is the first asset of its kind available for the Unity cross-platform engine:
a small and fully embedded speech synthesizer.

What features does Klattersynth TTS have?

  • It does not use OS or browser speech synth, so it sounds the SAME on all platforms. :sunglasses:
  • Dynamically speaks whatever text you ask it to.
  • Generates and plays streamed speech in real-time.
  • In WebGL builds the AudioClips are quickly pre-generated and then played.
  • Contains an English text-to-speech algorithm (text-to-phoneme transformation).
  • Alternatively, you can enter the documented phonemes directly, skipping the English TTS conversion rules.
  • You can query the current loudness of the speech, for tying effects to the audio (see the sketch after this list).
  • Uses normal AudioSource components: 3D spatialization, audio filters and reverb zones work like usual!
  • Contained in one ~100 KB cross-platform DLL file.
  • When embedded in your game or app and compressed for distribution, it shrinks to less than 30 KB. :eyes:
  • Supports all Unity versions starting from 5.0.0 and is available for practically all platforms targeted by Unity.
  • Also supported by RT-Voice PRO.
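For example, the loudness query could drive a simple visual effect each frame. A minimal sketch, assuming a hypothetical currentLoudness accessor on the Speech component (check the asset documentation for the actual property name):

    using UnityEngine;

    public class MouthScaler : MonoBehaviour
    {
        public Speech speech;   // Klattersynth Speech component
        public Transform mouth; // object scaled along with the speech loudness

        void Update()
        {
            // "currentLoudness" is a hypothetical accessor used for illustration.
            float loudness = speech.currentLoudness;
            mouth.localScale = Vector3.one * (1f + loudness);
        }
    }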

Why is Klattersynth TTS different from many other speech-related assets for Unity?

  • No need for the underlying platform to offer speech features (OS or browser).
  • No need for a network connection for external generation of audio clips.
  • No need to pre-generate the samples before creating a build of your app or game. The clips are either streamed in real time or generated on the fly while the app or game is running.

Visit the official website of the asset to try out a WebGL build yourself!
https://strobotnik.com/unity/klattersynth/

Demo videos of Klattersynth TTS:

Klattersynth TTS by Strobotnik (for Unity®)

Is this considered done or are you still working on the phonemes? The F's sound more like static and Th's are kind of just a pop. Also, in the WebGL demo, the base frequency doesn't seem to affect whisper very much. Are there more audio tweaks available?

Hi @Obsurveyor , I won't be actively working on the sounds of phonemes. It's only a distant possibility that I'd add 1-2 more later, or try to adjust the existing ones. But with this technique there aren't going to be huge improvements in that area; a synth this small is bound to have some limitations.

The example voices in the "Text Entry" demo are made by adjusting the three available parameters: "Ms Per Speech Frame" (effectively controls the speed), "Flutter" and "Flutter Speed" (which can add a bit of unsteady weirdness to the sound, for example; normally the flutter is just a somewhat inaudible variance in the voice wave).

Here's an image from the inspector:
[inspector screenshot]
(this is the "Slow and unsteady" voice of the text entry demo)

Very interesting, a couple of questions though. Since it's being generated in real time, is it possible to adjust the actual speed/pitch in real time as well? (e.g. in the WebGL demo, being able to adjust "Base Voice Frequency" and have it change in real time instead of having to prerender it, though I understand WebGL HAS to have it prerendered). If so, this would be PERFECT for my needs! And as for my second question - I completely forgot what it was! haha.

Hi @DbDib , you're correct - WebGL has to have audio prerendered, so in WebGL builds Klattersynth will need to generate the whole clip just before playing it. It doesn't take long, but it is pre-generated before actually starting to play the clip.

However, it is of course possible to adjust the pitch parameter of the AudioSource playing the generated clip, as you can with any AudioClip. Lowering it will both deepen the pitch and slow the playback at the same time (and vice versa).
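For example, repitching a generated clip uses nothing but the standard Unity AudioSource API:

    using UnityEngine;

    public class PitchExample : MonoBehaviour
    {
        public AudioSource audioSource; // AudioSource playing the generated speech clip

        void Start()
        {
            // Lowering the pitch both deepens the voice and slows playback,
            // exactly as with any other AudioClip.
            audioSource.pitch = 0.8f;
            audioSource.Play();
        }
    }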

When used in streaming mode, the synth latches on to the parameters given at the time it starts to speak that particular line (msPerSpeechFrame is likewise locked in at initialization time, to minimize any extra memory allocations needed later). Even real-time streamed audio is generated in batches, so fine-tuned control of parameters would need to be specified in advance (unless the batch size is very small). That's not a feature of the API now, but it's a possibility for a future version.

However, the currently supported way is to simply instruct the synth to speak e.g. just a single word at a time, and adjust the base frequency for each word once the previous one is finished. This works both with streamed and pre-generated (and possibly cached) speech clips.
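A rough sketch of that word-by-word approach as a coroutine (the speak method and the isSpeaking busy flag are assumptions for illustration; substitute the asset's actual calls):

    using System.Collections;
    using UnityEngine;

    public class WordByWordExample : MonoBehaviour
    {
        public Speech speechSynth; // Klattersynth Speech component

        IEnumerator SpeakWords(string[] words, int[] frequencies)
        {
            for (int i = 0; i < words.Length; i++)
            {
                // Hypothetical call - use the asset's actual speak method,
                // passing a different base frequency for each word.
                speechSynth.speak(words[i], frequencies[i]);

                // Wait until the current word has finished before the next one.
                while (speechSynth.isSpeaking) // hypothetical busy flag
                    yield return null;
            }
        }
    }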

Does this plugin support Chinese words?

@lzt120 , short answer: No.

Long answer: the text-to-speech only has an approximate mapping for the English language and no other languages. There's support for entering phonemes directly (the documentation has a list of those). It may be possible to compose some Chinese words using the phonemes directly (which would take time and experimentation). But even then there's no way to express the tones of spoken Chinese.
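To illustrate direct phoneme entry: the pregenerate call quoted later in this thread takes a bracketsAsPhonemes flag, so something along these lines should be possible (the speak call, the bracket syntax and the phoneme codes below are placeholders for illustration; the real phoneme list is in the asset's documentation):

    using System.Text;
    using UnityEngine;

    public class PhonemeEntryExample : MonoBehaviour
    {
        public Speech speechSynth; // Klattersynth Speech component

        void Start()
        {
            // Placeholder phoneme codes - NOT the documented set.
            var phonemes = new StringBuilder("[h eh l ow]");

            // Hypothetical call: with a bracketsAsPhonemes-style flag enabled,
            // bracketed text is read as phonemes instead of going through
            // the English TTS conversion rules.
            speechSynth.speak(phonemes, bracketsAsPhonemes: true);
        }
    }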

Thanks for the question.

Hey Tonic. I am getting this error: "Can't pre-gen speech clips while speech is being streamed (synth is active)".

I am trying to pre-generate a load of speech clips using this function:

    // Pre-generates a SpeechClip for each input string.
    SpeechClip[] GenerateSpeechClipArray(string[] speechStrings)
    {
        SpeechClip[] rtn = new SpeechClip[speechStrings.Length];
        StringBuilder speakSB = new StringBuilder();

        for (int i = 0; i < speechStrings.Length; i++)
        {
            // Reuse the same StringBuilder to avoid per-item allocations.
            speakSB.Length = 0;
            speakSB.Append(speechStrings[i]);
            rtn[i] = speechSynth.pregenerate(speakSB, voiceFrequency, voicingSource, bracketsAsPhonemes, true);
        }

        return rtn;
    }

I'm not entirely sure what I am doing wrong? Do I need to wait for a short time while the speechSynth pregenerates?

Hi @IceBeamGames ,

At a quick glance that looks fine to me.

Could you verify that the speechSynth instance which you're using is not playing some other speech clip right at the time when you're asking it to pregenerate stuff?

Also, in case the speech synth is flagged to use streaming mode, the AudioSource component used by the Speech isn't allowed to be playing anything when the synth is asked to pregenerate a clip.

Does the included Pangrams Example work for you? It pre-generates its clips in a batch, so you can use it as a reference. Please check KlattersynthTTS_Example_Pangrams_Controller.cs and the IEnumerator pangramsDemo() method. There's an if (!clipsGenerated) { ... } code block which contains the batch generation.
(Note 1: It's a coroutine, but only to update the progress info while clips are being generated - it would work just as well without being inside a coroutine. Note 2: There are 3 different speech synths used in the batch generation, but it works just as well if the code is modified to use a single one.)
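Building on that: a sketch of a guard that waits for the Speech's AudioSource to go idle before batch generation. The isPlaying check is standard Unity; whether it fully covers the "synth is active" condition is an assumption, and the method is assumed to live in the same class as the snippet above.

    // Assumed to live in the same class as GenerateSpeechClipArray above,
    // with speechAudioSource referencing the AudioSource the Speech uses.
    IEnumerator GenerateWhenIdle(string[] speechStrings)
    {
        // Pregeneration fails while the synth is streaming or its
        // AudioSource is still playing, so wait for it to go idle first.
        while (speechAudioSource.isPlaying)
            yield return null;

        // Now safe to batch-generate, using the method from the post above.
        SpeechClip[] clips = GenerateSpeechClipArray(speechStrings);
        // ... store or play the clips as needed ...
    }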

Is there a way to see the documented phonemes before buying the pack?

@Larse232312 I sent you a private message about that.

Klattersynth TTS is now also supported by RT-Voice PRO!


Hi,

just imported into a new project (2019.1.8), opened the KlattersynthTTS_Example_TextEntry scene,
entered text, e.g.:
'is there anybody in there'

the synth completely ignores the last two words (with the added bonus of speaking something at the beginning if there was previously any other text entered (I think))

the WebGL demo (Unity WebGL Player | Klattersynth TTS) on the other hand behaves rather differently, and as expected

can the package demo scene be configured to get reasonable results, at least comparable to the WebGL version?
if yes, why are those settings not the same as in the WebGL demo?

Even single words such as 'Help' are not spoken identically (with another added bonus that it's sometimes apparently needed to press the enter key twice in the textbox to start the speech)

Note: the displayed settings are exactly the same, i.e. I just ran the demo scene without any changes after importing

Another rather unpleasant surprise - is there any reason why it's distributed as an assembly only?
I would very much rather have access to the code, esp. for cases like the above - if they're not fixable by exposed user settings.

Thanks !!

edit: grammar

Hi @r618 , that sounds like an unexpected regression. There definitely shouldn't be any notable difference between the WebGL version and the package. I'll investigate this.

About being DLL only, I don't have current plans to release it in source code form.

@r618 I have reproduced the issue (using 2019.2 beta).

As a workaround, you can disable the "Use Streaming Mode" setting for the Speech component:


(By default it is enabled).
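If you prefer applying the workaround from code instead of the inspector, something like this should do it (the useStreamingMode field name is a guess based on the inspector label; verify it against the actual Speech API):

    using UnityEngine;

    public class StreamingWorkaround : MonoBehaviour
    {
        void Awake()
        {
            var speech = GetComponent<Speech>();
            // Hypothetical field matching the "Use Streaming Mode" checkbox.
            speech.useStreamingMode = false;
        }
    }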

The playback code for speech has two modes: streamed and non-streamed. Streamed means it will frequently pump the speech synth for data (on the fly), while non-streamed mode composes the whole speech clip before playing it. In practice there's no big difference - even non-streamed mode is quick to compose the needed clip and play it. WebGL is forced to use non-streamed mode, which is why that particular regression is not likely to happen on the web. When I released Klattersynth I tested pretty extensively on a wide variety of versions, so I guess this is probably an issue happening only with later version(s) of Unity.

So, you can use the workaround for now. I'll debug this once I have the chance. Looks like I finally must make a new release after the initial 1.0.0, as this is the first reported issue which clearly must be fixed… :smile:

k, will try it out later
so, with streamed mode the text can potentially be in the range of 'tons', right?

@r618 , yes (if the streamed mode would work, that is… sorry for this issue). Although my guess is that you will likely ultimately have some reason to split the speech into smaller parts and play back each when it's convenient.

EDIT! NOTE:
The comment below is partially correct. But after some further tests, I think there is no regression in Unity 2018.3. Instead it has a somewhat different underlying implementation, which actually fixes some of the weirdness that used to require hacks with earlier versions, and doesn't need them much anymore. It's still slightly unexpected in some minor details. I modified the text below to be tiny text, since it's partly misleading.

It seems that starting from Unity 2018.3 there's a change with audio clips using reader callbacks (I'd say it's a regression, and I have an isolated test which shows the difference between 2018.2 and 2018.3). Sorry for not noticing this when testing for compatibility with the latest Unity versions.

When reusing the same streamed clip (stopping it in between), it no longer asks for new data at Play(); instead it first plays some old data it already had in the buffer. Because of this the playback starts with a lag (if the old data was just silence), or even with wrong audio if Unity already had something in the existing sample buffer internally.

Additionally, because of the above issue, my code which monitors when the sound can be stopped is not in sync with what you hear, since the actual new data (the start of the new speech) is asked for by Unity only after a considerable pause (once playing the old buffer has finished).

Initial tests seem to indicate that the only way to cope with this for now is to create a new AudioClip every time, even though it used to work fine to keep reusing the same clip. This will create more memory pressure, with new clips being created & deleted.
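For context, this is the kind of reader-callback clip that streamed mode relies on; a minimal standalone sketch using only Unity's public API (not Klattersynth's actual internals):

    using UnityEngine;

    public class StreamedClipExample : MonoBehaviour
    {
        void Start()
        {
            // A streamed clip: Unity pulls samples through the callback
            // while playing, instead of reading a pre-filled buffer.
            var clip = AudioClip.Create("streamed", 44100, 1, 44100,
                                        true, OnAudioRead);
            var source = gameObject.AddComponent<AudioSource>();
            source.clip = clip;
            source.Play();
        }

        void OnAudioRead(float[] data)
        {
            // A speech synth would fill "data" with generated samples here.
            // The 2018.3+ behavior discussed above is about when Unity calls
            // this callback after a Stop()/Play() cycle on a reused clip.
            for (int i = 0; i < data.Length; i++)
                data[i] = 0f; // silence placeholder
        }
    }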

I'll see about filing a bug report, and work on a new version with an internal fix to get streamed mode working again.

Until then, the workaround is either to use Unity 2018.2.x or older, or to disable "Use Streaming Mode" when using Unity 2018.3+.

I edited my previous post, as it turns out I was able to make a properly looping audio source & custom-filled clip, reusing the same one across stop/play, without the unnecessary lags I first thought there would be.

I have a proof of concept which works both on older Unity versions (with the quirks previously needed) and just as well on newer Unity versions.

It'll take maybe a day or two until I get it integrated into Klattersynth and a new version released.

Klattersynth v1.1.0 is now released

It has fixes for streaming mode when used with Unity 2018.3+. Thanks to @r618 for the heads-up about the issue.
Compatibility with Unity 2019 is also verified.

Klattersynth TTS by Strobotnik (for Unity®)
