Resonance Audio SDK for Unity: Deliver High-Fidelity Spatial Audio at Scale

Today, Google released the Resonance Audio SDK for Unity, a cross-platform spatial audio toolkit for both mobile and desktop that delivers rich 3D sound at scale. This is a big win for anyone developing for mobile, where limited CPU resources have historically prevented the delivery of rich spatial audio. With Resonance Audio, Unity developers and sound designers alike can provide truly immersive experiences on all platforms.

Google's SDK for Unity lets you simultaneously render hundreds of 3D sound sources into a single ambisonic stream. Resonance Audio is also packed with additional features, such as scene geometry-based reverb with acoustic surface materials, ambisonic soundfield recording, and digital audio workstation-based monitoring.

To learn more, check out these resources:

- Unity Blog: Google's Resonance Audio: High-Fidelity Sound Across Mobile & Desktop
- Google Blog: Resonance Audio: Multi-platform spatial audio at scale
- Resonance Audio Developer Site: Resonance Audio guides and documentation

If you have Unity 2017.1 or later installed and are ready to add fully immersive audio to your projects, follow these steps:

1. Download the Resonance Audio SDK for Unity.
2. To spatialize audio sources, select the Resonance Audio spatializer in your Unity project's AudioManager settings, then enable the Spatialize property on every AudioSource you wish to spatialize.
3. To play back ambisonic audio clips, select the Resonance Audio ambisonic decoder in your Unity project's AudioManager settings, and enable the Ambisonic property when importing the clips. On playback, these clips will then be decoded correctly.
4. Add a Resonance Audio spatializer renderer effect to an AudioMixerGroup in your project and name it "ResonanceAudioMixer". (In the Resonance Audio SDK, this AudioMixerGroup already exists as a resource.)
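The per-source setup in the steps above can be sketched as a small Unity script. This is a minimal sketch only: it assumes the Resonance Audio spatializer has already been selected in the AudioManager settings, and it requires the Unity runtime, so it is illustrative rather than standalone.

```csharp
using UnityEngine;

// Sketch: enable spatialization on an AudioSource at runtime, and check
// whether its clip was imported with the Ambisonic flag set.
[RequireComponent(typeof(AudioSource))]
public class SpatializeSource : MonoBehaviour
{
    void Awake()
    {
        AudioSource source = GetComponent<AudioSource>();

        // Route this source through the spatializer plugin selected in
        // the project's AudioManager settings (Resonance Audio).
        source.spatialize = true;

        // Ambisonic clips are decoded by the selected ambisonic decoder
        // when the Ambisonic flag was enabled on the clip's import settings.
        if (source.clip != null && source.clip.ambisonic)
        {
            Debug.Log("Clip will be decoded as an ambisonic soundfield.");
        }

        source.Play();
    }
}
```

The same properties can also be set in the Inspector; doing it in code is useful when AudioSources are created dynamically.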
Finally, point each spatialized or ambisonic AudioSource's Output parameter to the "ResonanceAudioMixer" group.

For optimized performance, Resonance Audio processes all audio sources internally, removing them from the regular Unity audio pipeline. The spatialized output is then reintroduced into the Unity audio pipeline by the Resonance Audio spatializer renderer. To apply additional audio effects to spatialized sounds, apply them on the AudioSource and enable its "Spatialize post effects" parameter.

To access additional features of the Resonance Audio spatializer and ambisonic decoder, download the Resonance Audio SDK. The SDK includes components that let you set additional properties, such as audio source directivity. For more information on getting started with Ambisonic Soundfield Recording and environmental reverb, see the Resonance Audio SDK's developer guides and documentation.

FAQ for developers using Google VR Audio for Unity

1. How is the Resonance Audio SDK for Unity different from the audio spatializer included in the Google VR SDK?

Resonance Audio builds upon years of experience developing spatial-audio technology for Google VR. It includes the same advanced audio technology embedded in the Google VR SDK, and much more. Resonance Audio also offers cutting-edge features such as Geometric Reverb Baking (exclusive to Unity), which generates realistic audio reflections based on actual Unity scene geometry, and Ambisonic Soundfield Recording, which lets you author ambisonic source clips directly in the Unity Editor.

2. I use the audio spatializer bundled with the Google VR SDK in my Unity project. What is going to happen to my project?

Google will continue to support Google VR Audio. However, if you want to take advantage of new features such as Ambisonic Soundfield Recording, you will need to use the Resonance Audio SDK for Unity instead.
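To recap the routing and post-effects setup described earlier, the same can be done from a script. This is a minimal sketch under stated assumptions: it presumes a mixer asset named "ResonanceAudioMixer" is loadable from a Resources folder and exposes a group named "Master" that hosts the Resonance Audio spatializer renderer effect; both names are assumptions for illustration, and the script requires the Unity runtime.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Sketch: route an AudioSource's output into the mixer group that hosts
// the Resonance Audio spatializer renderer, and enable post effects.
[RequireComponent(typeof(AudioSource))]
public class RouteToResonanceMixer : MonoBehaviour
{
    void Awake()
    {
        AudioSource source = GetComponent<AudioSource>();

        // Assumed asset location: Assets/Resources/ResonanceAudioMixer.mixer
        AudioMixer mixer = Resources.Load<AudioMixer>("ResonanceAudioMixer");
        if (mixer != null)
        {
            // Point the source's Output at the group carrying the
            // Resonance Audio spatializer renderer effect.
            source.outputAudioMixerGroup =
                mixer.FindMatchingGroups("Master")[0];
        }

        // Required so that effects added on this AudioSource are applied
        // to the spatialized sound.
        source.spatializePostEffects = true;
    }
}
```

In most projects this routing is simply set once in the Inspector; the script form is mainly useful for sources instantiated at runtime.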