I wasn't quite sure where to put this post since my target platform is the Microsoft HoloLens, but my questions are really about Unity's speech recognition API. I'd like to use speech recognition in a Unity application on a HoloLens with a microphone other than the microphone array built into the device. I've seen and been told conflicting information on whether this is possible, so I'm looking for more details.

The Unity speech API layer is fairly sparse. When a (Unity) recognition engine is instantiated, how is the audio input device chosen? On desktop systems, I assume it's the default recording device from the sound control panel. If so, is there a similar setting on the HoloLens, or an API to set the default recording device prior to creating a speech recognizer?

I'm also assuming the Unity speech API is a layer on top of an underlying Windows speech API. Is there any reason I couldn't implement a Windows speech recognizer directly (aside from, say, resolving extra assemblies)? Which Windows API is the Unity speech API built on?
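For context, here's roughly how I'm creating a recognizer today with Unity's `UnityEngine.Windows.Speech` API (a minimal sketch; the keyword list is just illustrative). As far as I can tell, neither the constructor nor anything else on the class exposes an audio input device parameter:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class SpeechExample : MonoBehaviour
{
    KeywordRecognizer recognizer;

    void Start()
    {
        // The constructor takes only the keyword list (and optionally a
        // ConfidenceLevel) -- there is no way to select an input device here.
        recognizer = new KeywordRecognizer(new[] { "select", "move" });
        recognizer.OnPhraseRecognized += args =>
            Debug.Log("Heard: " + args.text);
        recognizer.Start();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            if (recognizer.IsRunning)
                recognizer.Stop();
            recognizer.Dispose();
        }
    }
}
```

So whatever device the recognizer listens on must be decided somewhere below this layer, which is what I'm trying to pin down.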