Multiple KeywordRecognizers

Hi,

The documentation for the KeywordRecognizer states that several of them can be active at the same time as long as they use different keywords. But when I try that, they appear to block each other, and only one recognizer triggers at a time.

For example: with two recognizers, one for “cube” and one for “red”, both at low confidence, saying each word on its own works as expected, but saying both words together (in either order) only detects “cube”.

The only way to get two keyword recognizers working in parallel is for the user to pause long enough between the words, so this cannot be used with normal speech. Is this intended behavior, or a bug that can be fixed?

I’m aware there is also a grammar recognizer, but I don’t need any grammar, just keywords of different types. And the required XML grammar makes it more difficult to programmatically set up a vocabulary based on the objects present in the level.
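The workaround I’m considering is to register all keywords with a single KeywordRecognizer and dispatch by category myself in the handler. A rough sketch, assuming the `UnityEngine.Windows.Speech` API (the dictionary-based dispatch and the class name `KeywordDispatcher` are just illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class KeywordDispatcher : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    // One combined vocabulary, each keyword mapped to its own action,
    // instead of one recognizer per keyword category.
    private readonly Dictionary<string, Action> actions = new Dictionary<string, Action>();

    void Start()
    {
        actions.Add("cube", () => Debug.Log("shape: cube"));
        actions.Add("red", () => Debug.Log("color: red"));

        recognizer = new KeywordRecognizer(actions.Keys.ToArray(), ConfidenceLevel.Low);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Look up the recognized phrase and run its handler.
        Action action;
        if (actions.TryGetValue(args.text, out action))
            action();
    }

    void OnDestroy()
    {
        if (recognizer != null && recognizer.IsRunning)
            recognizer.Stop();
        recognizer?.Dispose();
    }
}
```

But this loses the per-category separation I was hoping the multiple-recognizer setup would give me.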

Does anyone know of a better approach or tech to solve this?

This seems entirely consistent with all of my experience with speech recognition, going back to roughly 1992.

Sorry I can’t add much else. One suggestion: capture audio that works in one condition (single recognizer), then send that exact same audio to the other recognizer, and then to both, and see if that reveals anything. That would at least eliminate slight differences in how you say things from run to run.
