
Resolved KeywordRecognizer issue workaround

Discussion in 'VR' started by johnny_littlepunch, Aug 26, 2021.

  1. johnny_littlepunch


    Apr 23, 2021
    Hi, fellows. In a project using the OpenXR Plugin + XR Interaction Toolkit on Windows 10 (standalone, not UWP) with an HP Reverb G2, I am using UnityEngine.Windows.Speech.KeywordRecognizer and found strange behavior. Everything works fine as long as you don't take off the headset; remove it even briefly (say, for just 1 second) and it no longer responds to words.

    In detail: in the primary-button press handler I create and initialize the recognizer and subscribe to the OnPhraseRecognized event; then, while holding the button, I say a command word. After the command is recognized and executed, I release the primary button, and in the release handler I stop the recognizer if it is running, unsubscribe, dispose it, and even set the corresponding variable to null. So the recognizer is completely created and destroyed while the headset is mounted on the head; when the headset is removed and remounted, the recognizer does not exist (at least in my code).

    My guess is that when you remove the headset, the system detects this and performs some action that prevents KeywordRecognizer from working correctly afterwards. Notably, on the next primary-button press there are no exceptions: the recognizer is successfully created and, according to its status, running, but OnPhraseRecognized events are no longer sent. This issue effectively prevents KeywordRecognizer from being used at all - you don't want a command to stop working just because you took your headset off for a while, which is quite common. Thus, my questions are:

    1. How can I make KeywordRecognizer work again after the headset is removed and remounted?
    2. How does the system respond to headset removal and mounting?
    3. How can I detect headset removal and mounting?
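
    The press/release lifecycle described above can be sketched as follows (a minimal sketch; the class name, handler names, and keyword list are illustrative, not from the attached script, and wiring the handlers to the XR Interaction Toolkit is omitted):

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class PushToTalk : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    // Called from the primary-button press handler:
    // create, subscribe, start.
    public void OnPrimaryPressed()
    {
        recognizer = new KeywordRecognizer(new[] { "select", "menu", "back" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    // Called from the primary-button release handler:
    // stop if running, unsubscribe, dispose, null out.
    public void OnPrimaryReleased()
    {
        if (recognizer == null) return;
        if (recognizer.IsRunning) recognizer.Stop();
        recognizer.OnPhraseRecognized -= OnPhraseRecognized;
        recognizer.Dispose();
        recognizer = null;
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log($"Recognized: {args.text} ({args.confidence})");
    }
}
```

    After a headset remove/remount cycle, OnPrimaryPressed still succeeds and IsRunning reports true, but OnPhraseRecognized never fires again.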

    Have you experienced such behavior? My script is attached.
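
    Regarding question 3, one possible approach (an assumption on my part, not something verified with the Reverb G2) is polling Unity's userPresence input feature, which some XR runtimes report for the head device:

```csharp
using UnityEngine;
using UnityEngine.XR;

public class PresenceWatcher : MonoBehaviour
{
    private bool wasPresent;

    void Update()
    {
        // Poll the head-mounted device for the userPresence feature;
        // not every runtime/device exposes it, so check the results.
        InputDevice head = InputDevices.GetDeviceAtXRNode(XRNode.Head);
        if (head.isValid &&
            head.TryGetFeatureValue(CommonUsages.userPresence, out bool present) &&
            present != wasPresent)
        {
            wasPresent = present;
            Debug.Log(present ? "Headset mounted" : "Headset removed");
        }
    }
}
```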

    Attached Files:

    • Menu.cs
      File size:
      4.3 KB
    Last edited: Aug 27, 2021
  2. johnny_littlepunch


    Apr 23, 2021
    The main secret of this riddle: do not use UnityEngine.Windows.Speech under any circumstances if you want a predictable, stable and responsive result!

    Instead, it makes sense to implement speech recognition and synthesis in separate apps built with VS, communicating with the main Unity app via interprocess communication. The apps run in the background, so users will hardly notice them. In my case, at least, everything works great ))

    The SpeechRecognitionEngine and SpeechSynthesizer classes (from System.Speech.dll) are much more capable than what Unity offers. MemoryMappedFile and Mutex really do work (whereas, e.g., NamedPipeServerStream throws a NotImplementedException in Mono on the x64 architecture).
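
    To illustrate the architecture, here is a minimal sketch of such a helper app (a .NET Framework console app on Windows): it recognizes keywords with System.Speech and publishes the latest phrase through a named MemoryMappedFile guarded by a named Mutex. The names "SpeechLinkMap" and "SpeechLinkMutex" and the keyword list are illustrative, not from my actual apps; the Unity side must use the same names.

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Speech.Recognition;
using System.Text;
using System.Threading;

class SpeechHelper
{
    const int MapSize = 256;

    static void Main()
    {
        // Shared memory + named mutex visible to the Unity process.
        var mmf = MemoryMappedFile.CreateOrOpen("SpeechLinkMap", MapSize);
        var view = mmf.CreateViewAccessor();
        var mutex = new Mutex(false, "SpeechLinkMutex");

        var engine = new SpeechRecognitionEngine();
        engine.LoadGrammar(new Grammar(new GrammarBuilder(
            new Choices("select", "menu", "back"))));
        engine.SetInputToDefaultAudioDevice();
        engine.SpeechRecognized += (s, e) =>
        {
            byte[] bytes = Encoding.UTF8.GetBytes(e.Result.Text);
            mutex.WaitOne();
            try
            {
                view.Write(0, bytes.Length);                 // length prefix
                view.WriteArray(4, bytes, 0, bytes.Length);  // UTF-8 payload
            }
            finally { mutex.ReleaseMutex(); }
        };
        engine.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine(); // keep the background app alive
    }
}
```

    On the Unity side, open the same map with MemoryMappedFile.OpenExisting("SpeechLinkMap"), take the same named mutex, and read the length prefix and payload each frame (or on a timer) to pick up new phrases.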

    Helpfully, the Microsoft documentation for these classes contains clear examples. If somebody needs more concrete examples, I can share mine - they are quite short.

    If you have a better solution, it would be interesting to know.
    Last edited: Nov 9, 2021