Hi,
The documentation for KeywordRecognizer states that several recognizers can be active at the same time as long as they use different keywords. But when I try that, they seem to block each other, and only one recognizer triggers at a time.
For example, with two recognizers, one for "cube" and one for "red", both at low confidence: saying each word on its own works as expected, but saying both words together (in either order) only ever detects "cube".
The only way to get two keyword recognizers to work in parallel is for the user to pause long enough between the words, so this can't be used with natural speech. Is this intended behavior, or is it a bug that can be fixed?
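For reference, here's a minimal sketch of the setup I'm describing (the class name and log messages are just illustrative):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class TwoRecognizersRepro : MonoBehaviour
{
    KeywordRecognizer shapeRecognizer;
    KeywordRecognizer colorRecognizer;

    void Start()
    {
        // One recognizer per keyword "type", both at low confidence.
        shapeRecognizer = new KeywordRecognizer(new[] { "cube" }, ConfidenceLevel.Low);
        shapeRecognizer.OnPhraseRecognized += args => Debug.Log("shape: " + args.text);
        shapeRecognizer.Start();

        colorRecognizer = new KeywordRecognizer(new[] { "red" }, ConfidenceLevel.Low);
        colorRecognizer.OnPhraseRecognized += args => Debug.Log("color: " + args.text);
        colorRecognizer.Start();
    }

    void OnDestroy()
    {
        shapeRecognizer.Dispose();
        colorRecognizer.Dispose();
    }
}
```

With this, saying "red cube" in one breath logs only the shape, while pausing between the words logs both.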
I'm aware there is also a GrammarRecognizer, but I don't need any grammar, just keywords of different types. And the required XML grammar file makes it harder to programmatically build a vocabulary from the objects present in the level.
Does anyone know of a better approach or technology to solve this?