
Audio Resonance Audio SDK for Unity: Deliver High-Fidelity Spatial Audio at Scale

Discussion in 'Audio & Video' started by michaelberg, Nov 6, 2017.

  1. michaelberg

    michaelberg

    Unity Technologies

    Joined:
    Jan 11, 2017
    Posts:
    5


    Resonance Audio SDK for Unity: Deliver High-Fidelity Spatial Audio at Scale

    Today, Google released the Resonance Audio SDK for Unity, a cross-platform spatial audio toolkit – for both mobile and desktop – that delivers rich 3D sound at scale. This is a big win for anyone developing for mobile, as limited CPU resources have historically prevented the delivery of rich spatial audio to those platforms.


    With Resonance Audio, Unity developers and sound designers alike can provide truly immersive experiences on all platforms. Google's SDK for Unity lets you simultaneously render hundreds of 3D sound sources into a single ambisonic stream.


    Resonance Audio is also jam-packed with additional cool features like scene geometry-based reverb with acoustic surface materials, ambisonic soundfield recording, and digital audio workstation-based monitoring. To learn more, check out these resources:



    If you have Unity 2017.1 or later installed and are ready to add fully immersive audio to your projects, follow these steps:

    1. Download the Resonance Audio SDK for Unity.

    2. To spatialize audio sources, select the Resonance Audio spatializer in your Unity project’s AudioManager settings, then set the Spatialize property on all AudioSources that you wish to spatialize.

    3. Similarly, to play back an ambisonic audio clip, select the Resonance Audio ambisonic decoder in your Unity project’s AudioManager settings. When importing ambisonic audio clips, enable the Ambisonic property. When played back, these clips will be correctly decoded.

    4. In addition, add a Resonance Audio spatializer renderer effect to an AudioMixerGroup in your project. Name it “ResonanceAudioMixer”. In the Resonance Audio SDK, this AudioMixerGroup will already exist as a resource. Point each spatialized or ambisonic AudioSource’s output parameter to the “ResonanceAudioMixer”.

    5. To achieve optimized performance, Resonance Audio processes all audio sources internally and removes the Audio Sources from the regular Unity audio pipeline. The spatialized output is then reintroduced into the Unity audio pipeline by the Resonance Audio spatializer renderer. To apply additional audio effects to spatialized sounds, they must be applied on the AudioSource and the “Spatialize post effects” parameter must be enabled.

    6. To access additional features with the Resonance Audio spatializer and ambisonic decoder, download the Resonance Audio SDK. In the SDK, there are components that allow you to set additional properties, such as audio source directivity.

    For more information on getting started with Ambisonic Soundfield Recording and environmental reverb, please see the Resonance Audio SDK’s Developer Guides and documentation.
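    The steps above can be sketched in script form. The following is a minimal, hypothetical setup component (the "ResonanceAudioMixer" group name comes from step 4; the AudioSource properties used are standard Unity API, but treat this as an illustration rather than the official setup):

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Hypothetical helper illustrating steps 2, 4 and 5 in code.
// Assumes the project's AudioManager already selects the Resonance Audio
// spatializer (step 2) and that an AudioMixerGroup named
// "ResonanceAudioMixer" exists (step 4).
[RequireComponent(typeof(AudioSource))]
public class ResonanceSourceSetup : MonoBehaviour
{
    // Assign the "ResonanceAudioMixer" group in the Inspector.
    public AudioMixerGroup resonanceMixerGroup;

    void Awake()
    {
        var source = GetComponent<AudioSource>();
        source.spatialize = true;                           // step 2: spatialize this source
        source.spatialBlend = 1.0f;                         // fully 3D positioning
        source.outputAudioMixerGroup = resonanceMixerGroup; // step 4: route into the renderer group
        source.spatializePostEffects = true;                // step 5: AudioSource effects before the spatializer
    }
}
```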


    FAQ for developers using Google VR Audio for Unity

    1. How is the Resonance Audio SDK for Unity different from the audio spatializer included in the Google VR SDK?

    Resonance Audio builds upon years of experience developing spatial-audio technology for Google VR. It includes the same advanced audio technology embedded with the Google VR SDK and much more. Resonance Audio also offers cutting-edge features such as Geometric Reverb Baking (exclusive to Unity) that allow you to generate realistic audio reflections based on actual Unity scene geometry and the Ambisonic Soundfield Recording feature, which allows you to author ambisonic source clips directly in the Unity Editor.

    2. I use the audio spatializer bundled with the Google VR SDK in my Unity project, so what is going to happen with my project?

    Google will continue to support Google VR Audio. However, if you want to take advantage of new features such as Ambisonic Soundfield Recording, you will need to use the Resonance Audio SDK for Unity instead.
     
    Last edited: Nov 6, 2017
  2. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    Does it support WebGL?
    Screen Shot 11-06-17 at 11.16 AM 001.PNG
     
  3. Panagiotis-Kouvelis

    Panagiotis-Kouvelis

    Joined:
    Mar 7, 2009
    Posts:
    16
    Congratulations, great news!

    Already downloading at the SoundFellas.com game audio labs!

    Looking forward to starting to use it and being part of the community.

    Cheers!
     
  4. slufter

    slufter

    Joined:
    Jun 12, 2013
    Posts:
    20
    Exciting stuff! I really enjoyed the demos. :)

    I have a question about pairing Google Resonance with Native Audio Plugins, such as the examples by Unity located here: ( https://bitbucket.org/Unity-Technologies/nativeaudioplugins ).

    Is it possible to process native audio plugin effects with Google Resonance? (I may be phrasing this poorly, please bear with me.) For example, in the plugin demos I've linked above, there are some demo synthesizer plugins, such as the "DemoTeeBee303". If I understand correctly, they generate sounds as an effect – not your typical audio source with an audio clip. Can this be processed by the Google Resonance spatializer plugin?

    I hope I've expressed that clearly enough, I'm still pretty new to all this. Any thoughts would be a huge help. Much thanks!
     
  5. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    The audio positioning is incredibly good.

    It's so good that even using JBL monitors, not headphones, I can accurately pinpoint where a sound comes from, even from behind. Wow.

    In a shooter or an RTS with a large map, locating threats is usually done with visual cues. With audio imaging this good, visuals can be replaced by audio, like in real life.

     
  6. Peter77

    Peter77

    Joined:
    Jun 12, 2013
    Posts:
    2,356
    I tested v1.0.0 (the .package file) in Unity 2017.1.2p2, using the provided ResonanceAudioDemo.unity scene.
    However, no audio was audible at first.

    I had to open "Edit > Project Settings > Audio" and change "Spatializer Plugin" and "Ambisonic Decoder Plugin" from "None" to "Resonance Audio", as shown below.

    This made the example demo work for me.

    audio_settings.png
     
  7. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    It's one of the tricks in the Google documentation. I usually don't read those, but this one is concise and complete...

    ...Except that I still wonder whether occluding objects can be given a material to provide an absorption rate. Baked occlusion seems to be linked to a material, but not dynamic occluders, it seems.

    PS: dynamic occlusion! Wow! To try it, change the ResonanceAudioListener's occlusion mask to Everything and add a cube in the room, then move behind the cube. Voila!
     
  8. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    22,231
    Hi!

    Does it work with consoles and for traditional 3rd person games (non VR) ? I'd like to build my whole game with rich audio, and for larger levels....

    Thanks.
     
  9. Peter77

    Peter77

    Joined:
    Jun 12, 2013
    Posts:
    2,356
    It's a really nice feature; I've already played with it :)

    It's been implemented via Physics.RaycastNonAlloc, so... not sure how fast it is if many audio sources enable occlusion.
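    For a sense of the cost involved, here is a generic sketch of a per-source occlusion raycast using Physics.RaycastNonAlloc (a hypothetical component, not the SDK's actual implementation): the preallocated buffer avoids GC allocations, but each occluding source still pays for one cast per update.

```csharp
using UnityEngine;

// Hypothetical per-source occlusion probe, illustrating the kind of work
// a raycast-based occlusion check does. Not the SDK's actual code.
public class OcclusionProbe : MonoBehaviour
{
    public Transform listener;                              // the audio listener's transform
    private readonly RaycastHit[] hits = new RaycastHit[8]; // preallocated: no GC per query

    // Returns the number of colliders between this source and the listener.
    public int CountOccluders()
    {
        Vector3 toListener = listener.position - transform.position;
        return Physics.RaycastNonAlloc(transform.position, toListener.normalized,
                                       hits, toListener.magnitude);
    }
}
```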
     
  10. Peter77

    Peter77

    Joined:
    Jun 12, 2013
    Posts:
    2,356
    According to the blog post, consoles don't seem to be supported (yet?):
    Non-VR applications seem to be supported. I built and ran the provided example project without VR hardware and it seemed fine to me (Windows 10 desktop).
     
    hippocoder likes this.
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    22,231
    I'd like official verification that all relevant consoles will be supported or it's a complete non starter for us.
     
  12. ceebeee

    ceebeee

    Joined:
    Mar 7, 2017
    Posts:
    138
    I'm curious how this compares with SteamAudio, which I'm currently using. If I see any real advantages, I may switch over.
     
  13. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    Question time:
    1. Is the transition between overlapping audio probes lerped through time instead of position?
    2. The audio profiler shows CPU at 8% on a 1700X; is that 8% on another thread?
    3. Max distance has no effect on the sound; is volume how you let the spatializer determine the sound's reach?
    4. If distance has no effect on the sound's volume, how does the spatial property affect the sound with this plugin?
    5. How do I simulate the atmosphere muddying out the sound? The same trick of cranking up the "spatial" curve and adding a low-pass filter, or something specific to Resonance?
    6. Looking through the code, a float is sent to Resonance for occlusion and Resonance does its magic; how can we tweak the effect of occlusion?
     
    Last edited: Nov 8, 2017
  14. michaelberg

    michaelberg

    Unity Technologies

    Joined:
    Jan 11, 2017
    Posts:
    5
    Native audio plugins only work as mixer effects (other than spatializers and ambisonic decoders), so these will not work well with the current Resonance spatialization design, unless you want to apply a native audio effect to the mix of all spatialized sounds. At the AudioSource level, your options are to use Unity's built-in audio effects or to write/use a C# component that implements OnAudioFilterRead.
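    A minimal sketch of the OnAudioFilterRead option mentioned above (a simple gain stands in for a real effect; this is illustrative, not part of the SDK):

```csharp
using UnityEngine;

// Minimal per-source effect via OnAudioFilterRead. This callback runs on the
// audio thread; with "Spatialize post effects" enabled on the AudioSource,
// per-source effects like this run before the spatializer, so Resonance
// processes the filtered signal.
[RequireComponent(typeof(AudioSource))]
public class SimpleGainFilter : MonoBehaviour
{
    [Range(0f, 2f)] public float gain = 1.0f;

    void OnAudioFilterRead(float[] data, int channels)
    {
        // data is interleaved across channels; apply gain to every sample.
        for (int i = 0; i < data.Length; i++)
            data[i] *= gain;
    }
}
```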
     
  15. michaelberg

    michaelberg

    Unity Technologies

    Joined:
    Jan 11, 2017
    Posts:
    5
    Here are some partial answers:
    1. I don't know. Sorry! I'm really interested in how these transitions are handled too.
    2. Yes, the bulk of the processing is off the main thread. I believe the occlusion ray cast, if enabled, is the main thing you should monitor that will add CPU cost to the main thread.
    3. Max distance should have an effect. Do you have spatial blend set to 1.0 (3d)?
    4. Distance should have an effect, if things are working properly.
    5. I think you should get some of this effect with the early reflections/reverb work that is going on in these plugins, if enabled, but if you want to experiment with adding a low-pass filter, etc., that sounds like a good thing to try. With components that are available in the Resonance Audio SDK for Unity, you can also turn off different parts of the algorithm if you want to do something completely custom.
    6. I don't think you can tweak the occlusion effect. My understanding is the plugin is expecting that occlusion value to be 0.0 or 1.0 and it works its magic based on that information alone. I don't think you can pass down 0.5, for example, to make the effect more subtle, but I could be wrong.
     
    laurentlavigne likes this.
  16. alperg

    alperg

    Official Google Employee

    Joined:
    Jun 17, 2015
    Posts:
    3
    Just to add on to some of the points Michael answered:
    3) When the distance attenuation curve is set to logarithmic, Unity does not zero out the distance attenuation at max distance, but rather stops attenuating the sound at that point. That may be what you are experiencing?
    6) It is actually possible to tweak the occlusion value beyond 1.0, and also to use fractional values such as 0.5 as desired, by modifying the ResonanceAudio.ComputeOcclusion method accordingly. The resulting value is used as an input that is continuously mapped to the occlusion filter.
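    For illustration, the tweak might look something like this (a hypothetical call site; the exact ResonanceAudio.ComputeOcclusion signature should be checked against the SDK source before relying on it):

```csharp
// Hypothetical sketch: scale the computed occlusion before the plugin
// consumes it. Fractional values soften the effect; values above 1.0
// exaggerate it. Signature assumed from the discussion above.
float occlusion = ResonanceAudio.ComputeOcclusion(source.transform);
occlusion *= 0.5f; // subtler occlusion, as suggested above
```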
     
    laurentlavigne likes this.
  17. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    68
    There's actually a demo in the native audio plugin examples (speakerrouting) that routes audio from a native audio plugin running on the mixer *into* an Audio Source using OnAudioFilterRead. Then the spatialization can run on that stream and route it to another Audio Mixer Group.
    I had initially thought this wouldn't work because there's a bug in Unity where results from OnAudioFilterRead aren't panned left and right with the default spatialization. But if you use a spatializer plugin instead, it works fine.

    @slufter - FYI - adding this routing into Audio Helm in the next version.
     
    slufter and r618 like this.
  18. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    57% audio CPU
    19% Unity CPU
    So total audio CPU isn't the hardware's CPU; does anyone know what it is?

    Screen Shot 11-11-17 at 08.21 PM.PNG
     
  19. yero1209

    yero1209

    Joined:
    Sep 7, 2016
    Posts:
    3
    Just to answer the transition question (1).

    You are right, there is no positional interpolation between overlapping probes. A switch of probes happens immediately when crossing probe boundaries. See https://developers.google.com/reson...guide#understanding_overlapping_reverb_probes

    However, in order to avoid artifacts, we let the reverb tail of the last applied probe "run its course" or die out. In that sense, yes, you may say there is a temporal transition.
     
  20. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    It's a strange choice and doesn't sound very good. Is it a design choice or a first release limitation?
     
  21. yero1209

    yero1209

    Joined:
    Sep 7, 2016
    Posts:
    3
    Thanks for the feedback! It was a choice mainly to be consistent with how Audio Rooms handle transitions, so as to not surprise existing users when they start using Reverb Probes together with Audio Rooms. But we can definitely look into the transitions for both and potentially change them so they are consistent AND better. Could you create an issue on https://github.com/resonance-audio/resonance-audio-unity-sdk/issues and label it as a "feature request"?

    Thanks again.
     
    laurentlavigne likes this.
  22. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    https://github.com/resonance-audio/resonance-audio-unity-sdk/issues/6
    What is audio room?
     
  23. Peter77

    Peter77

    Joined:
    Jun 12, 2013
    Posts:
    2,356
  24. casio_2000

    casio_2000

    Joined:
    May 26, 2011
    Posts:
    49
    Hi,
    Is it possible to sync an ambisonic audio file to a video?
    With the FB360 plugin for the Gear VR, we can sync the TBE to the millisecond of the video, so it's always in sync.

    Is there any way to do the same thing with the Resonance audio soundfield?

    Thanks.
     
  25. yero1209

    yero1209

    Joined:
    Sep 7, 2016
    Posts:
    3
    Hi,

    About the occlusion effect, first of all, the acoustic materials we provide are for reflectivity only, and not for transmission. Basically for an input energy, we only model how much is reflected back. The part that is not reflected could be either absorbed or transmitted (passing through the object), and we don't yet have a distinction between the latter two. In the future we might consider providing this feature.

    But before that, you can use our ResonanceAudio.ComputeOcclusion() as a reference and do something similar to compute an occlusion value (a positive float, with 0 meaning no occlusion and higher values meaning more occlusion) that better suits your needs.

    Thanks!
     
  26. IRALTA

    IRALTA

    Joined:
    Apr 7, 2015
    Posts:
    12
    Do you know how to use .tbe files with Google Resonance?
     
  27. emauskopf

    emauskopf

    Official Google Employee

    Joined:
    Jul 2, 2017
    Posts:
    2
    Resonance Audio works for traditional 3rd person games (non-VR) on mobile/desktop platforms, using stereo output via headphones. However, it does not currently support consoles, because those rely on different CPU architectures and would require channel-based audio output for the majority of users, who don't wear headphones. If RA were to support consoles, which RA features and console platforms would you find most relevant?
     
    Alverik likes this.
  28. emauskopf

    emauskopf

    Official Google Employee

    Joined:
    Jul 2, 2017
    Posts:
    2
    It is indeed possible to sync the Ambisonic Soundfield recordings made by Resonance Audio from within Unity with video playback in Unity. If you know the timestamp of the video playing back, you can check that against the 'time' field on the Unity AudioSource which is playing the ambisonic soundfield, and reset it if it is too far out of sync.

    For example:

    if (Mathf.Abs((float)videoPlayer.time - audioSource.time) > syncThreshold)
        audioSource.time = (float)videoPlayer.time;

    Of course, this will cause a pop in the audio, so syncThreshold should be set to a high value so the reset only happens when there was a buffering problem with the video (and when the video first finishes buffering).
     
    Alverik likes this.
  29. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    22,231
    Hello! I would like to support PS4 and Xbox One. Switch would be nice, but as it's quite portable, it's no great loss.

    Am I wrong to think this would enhance ordinary non-VR experiences? Perhaps I'm barking up the wrong tree, as I didn't realise headphones would be mandatory, sorry :)
     
    Alverik and laurentlavigne like this.
  30. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
    You're not wrong; even with speakers on each side of my monitor I notice much improved positioning. It's even helpful with an RTS-type camera.
     
  31. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    1,873
  32. agareau

    agareau

    Joined:
    Aug 24, 2017
    Posts:
    1
    I wonder, is it possible that the soundfield (or the entire SDK) does not work in Unity 2017.3?
    On iOS, I had to update the pods manually via the terminal to make the SDK work,
    but on Android no luck so far.

    Using:
    Unity 2017.3.0f3
    Resonance Audio SDK for Unity v1.1.1


    I get those errors in Android Device Monitor:
    Unable to find libaudiopluginresonanceaudio

    Audio effect Resonance Audio Renderer could not be found.
    Check that the project contains the correct native audio plugin
    libraries and that the importer settings are set up correctly.


    In Unity 2017.2, everything worked perfectly!

    Any idea?
     
    bradlaronde likes this.
  33. Fairennuff

    Fairennuff

    Joined:
    Aug 19, 2016
    Posts:
    84
    Are Unity Mixer Groups supported on the default ResonanceAudioMixer?

    I have a few groups set up on it, with sliders to adjust them down: pretty standard volume controls for music/SFX/master, etc. The only one that seems to have any effect is the Master group.

    I have a fire playing through a ResonanceAudioSource with the AudioSource component's output set to the SFX group. Turning down the SFX group does nothing (despite seeing the levels change in the editor), but the fire gets quieter when I turn down the Master volume group.
     
    Last edited: Jan 20, 2018
  34. bradlaronde

    bradlaronde

    Joined:
    Dec 22, 2016
    Posts:
    1
    Same thing.
     
  35. SeaSand

    SeaSand

    Joined:
    Jan 30, 2016
    Posts:
    4
    I've tried to use Mixer Groups as well. Yes, only the Master group (the group with the spatializer renderer effect) seems to change audio volume. I guess this is due to the way Resonance Audio handles spatialization. According to this blog post (https://blogs.unity3d.com/2017/11/0...io-high-fidelity-sound-across-mobile-desktop/) it converts all clips to ambisonics and then spatializes them all only once, for efficiency reasons. I imagine that only after the spatializer renderer effect in the respective Mixer Group do the clips leave the Resonance Audio pipeline and can be handled the usual way in the Mixer.
    You could try using multiple groups, each with a spatializer renderer effect, and mix those. I haven't tried this yet. Even if it works, it might be less CPU efficient...
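    One possible workaround sketch, assuming (as described above) that per-group faders upstream of the renderer effect don't affect spatialized output: drive category volume on the AudioSource components directly. This is a hypothetical component, not an official solution:

```csharp
using UnityEngine;

// Hypothetical category volume control that bypasses the mixer groups:
// scales AudioSource.volume directly for a set of sources (e.g. all SFX),
// which takes effect before Resonance processes them.
public class CategoryVolume : MonoBehaviour
{
    public AudioSource[] sources;   // assign the category's sources in the Inspector

    // volume01 in [0, 1].
    public void SetVolume(float volume01)
    {
        foreach (var s in sources)
            s.volume = volume01;
    }
}
```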
     
  36. SeaSand

    SeaSand

    Joined:
    Jan 30, 2016
    Posts:
    4
    1) According to the documentation (and the prefab in the Unity package), the Spatialize checkbox must be disabled in the AudioSource component when adding an Ambisonic soundfield. I tried this with an Ambisonic clip; it only worked when the checkbox was ENabled. Otherwise everything is silent. Or am I missing some information?
    EDIT: Sorry, I was wrong. After disabling and reenabling the Ambisonic checkbox in the AudioClip import settings (clip recorded with the Soundfield Recorder of the Resonance Audio Listener component) the soundfield works as expected with the Spatialize checkbox disabled in the AudioSource component.

    2) In the reverb baking demo scene: If I put a spatialized AudioSource somewhere in the church, the reverb is applied as long as the Listener is inside the building. When leaving the church and waiting at the door the sound from inside the church is audible, but has no reverb, which is not the natural experience. I’ve read the paragraph about overlapping probes and don’t see a way to apply the reverb to sources inside a probe when the Listener is outside but near the respective probe. Any ideas (except putting an extra effect on the AudioSource)?
     
    Last edited: Feb 16, 2018
  37. verron

    verron

    Joined:
    Dec 6, 2012
    Posts:
    1
    Hi,
    Resonance works fine in a Unity project; however, it does not seem to work when exporting the Unity project as a WebGL application. Is this supported?

    This is the error message in the Firefox console :
    Audio effect Resonance Audio Renderer could not be found. Check that the project contains the correct native audio plugin libraries and that the importer settings are set up correctly.
    Audio source is playing an ambisonic audio clip, but an ambisonic decoder could not be created successfully. Make sure an ambisonic decoder is selected in the audio settings.


    Thanks
     
    Last edited: Mar 28, 2018
  38. ceebeee

    ceebeee

    Joined:
    Mar 7, 2017
    Posts:
    138
    Pretty sure it's only for desktop applications.
     
  39. ceebeee

    ceebeee

    Joined:
    Mar 7, 2017
    Posts:
    138
    So I understand that Resonance Audio is now built into Unity 2018. However, I'm a bit confused about how to use this built-in version. I see you can choose Resonance Audio as the Spatializer and Ambisonic Decoder, but none of the components seem to exist.

    Do you still need to import the older SDK to get access to the components, or how do you go about this? Or is this a stripped-down version that works through Unity's standard components?
     
    noemis likes this.
  40. id0

    id0

    Joined:
    Nov 23, 2012
    Posts:
    115
    I have noticed that if a scene has a Resonance audio source that isn't playing (not needed right now), it breaks the stereo effect on all sounds.
     
  41. yankow

    yankow

    Joined:
    Apr 10, 2018
    Posts:
    1
    Hello, for some reason on Mac, although everything works fine inside Unity, I lose both Resonance Audio Room and Resonance Audio Reverb Probes when exporting as an app. This is highly frustrating, I also tried with the demos and it was the same, with both baked and/or not baked reverbs. I haven't found anyone complaining about it yet though, so I was wondering if anyone else could make it work...? Thanks in advance!
     
  42. Guneriboi

    Guneriboi

    Joined:
    Nov 25, 2010
    Posts:
    6
    Hey,

    If anyone can help me, please.

    I installed Resonance Audio yesterday and since then every time I click on a component with Resonance Audio Source my terrain goes black and I get this giant crosshair that mirrors the Listener and Source Directivity icons from the Resonance Audio Source component.

    I'm sure there is something obvious I'm missing but at the moment it's not obvious. Any help would be appreciated.

    looks like this:


    Thanks in Advance
    Nathan