Audio: How to avoid a flanging effect from simultaneous sounds in splitscreen multiplayer?

Discussion in 'Audio & Video' started by Xarbrough, Nov 23, 2019.

  1. Xarbrough

    Joined: Dec 11, 2014
    Posts: 1,188
    I’ve started implementing sounds for our local multiplayer splitscreen game. At the moment I have an AudioListener in the center of the scene and a fixed-position AudioListener for each player. All sources are 2D and I map some of the sounds to L/R stereo pan depending on the splitscreen position. This gives a nice effect when players are using special sounds and they all overlap at different times.
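    The pan mapping is nothing fancy; roughly something like this (a simplified sketch, not our actual code, and the slot-to-pan values are just examples):

    Code (CSharp):
    using UnityEngine;

    // Simplified sketch: nudge a 2D AudioSource toward a player's side of the
    // screen based on their splitscreen slot. Values are placeholders.
    public static class SplitscreenPan
    {
        // Slot 0 = left half of the screen, slot 1 = right half.
        public static float PanForSlot(int slotIndex)
        {
            return slotIndex == 0 ? -0.6f : 0.6f;
        }

        public static void PlayForPlayer(AudioSource source, AudioClip clip, int slotIndex)
        {
            source.spatialBlend = 0f;                 // keep the source fully 2D
            source.panStereo = PanForSlot(slotIndex); // push it toward that player's side
            source.PlayOneShot(clip);
        }
    }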

    Now the issue: when two players trigger the same sound at almost the same time, the sound becomes much louder and/or sounds like a laser (a flanger/chorus-like effect). I know why this is happening, but what are the best solutions to fix it?

    Ideas:
    • Randomize pitch, volume, and panning, or even force the sounds to always be different, in the hope of reducing the flanging.
    • Force different sound clips that are not too similar.
    • Merge multiple sound events into a single one. But what’s the best way? Per frame, or over a series of frames? If the merge duration is longer than e.g. 50 ms, all sounds have a delay, and the flanging effect may still not be avoided completely.
    Does anyone have experience with this sort of issue?
     
  2. sdochertymusic

    Joined: Aug 13, 2019
    Posts: 8
    Randomising the volume won't help because the sounds will still be out of phase ... although the effect will be reduced.

    I would simply have a lot more sounds ... like 10 or more similar clips that you can randomly play instead of calling the same clip every time ... also a small amount of pitch randomisation (± a few percent) will help too.
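    Something along these lines, as a rough sketch (the clip pool and jitter amount are just example values):

    Code (CSharp):
    using UnityEngine;

    // Sketch: play a random variation from a pool of similar clips and
    // randomise the pitch slightly so overlapping instances don't phase.
    public class RandomizedSfx : MonoBehaviour
    {
        public AudioSource source;
        public AudioClip[] variations;                        // e.g. 10+ similar takes of the same sound
        [Range(0f, 0.1f)] public float pitchJitter = 0.03f;   // roughly ±3 percent

        public void Play()
        {
            AudioClip clip = variations[Random.Range(0, variations.Length)];
            source.pitch = 1f + Random.Range(-pitchJitter, pitchJitter);
            source.PlayOneShot(clip);
        }
    }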
     
  3. Xarbrough

    Joined: Dec 11, 2014
    Posts: 1,188
    After testing a few things together with our sound designer, we settled on a solution in which I queue all SFX requests during Update, remove duplicate entries, and then play the remaining sounds in the queue in LateUpdate. According to our sound designer, this is easier for him to manage, since he doesn't have to deal as much with loudness spikes or over-crowded sounds.

    So, for our specific game use case, it works well to only merge sounds within a single frame, because these are usually triggered at exactly the same time. Additionally, I do some 2D spatializing by simply mapping the y-position of a source to the stereo pan range. If multiple duplicate sounds are queued, I use the average of their positions.
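    In heavily simplified form, the idea looks something like this (names and ranges are placeholders; our actual system does more):

    Code (CSharp):
    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;

    // Simplified sketch: collect SFX requests during the frame, collapse
    // duplicates of the same clip, then play the survivors in LateUpdate.
    public class SfxQueue : MonoBehaviour
    {
        struct Request
        {
            public AudioClip Clip;
            public float YPosition; // world y of the emitter, used for stereo pan
        }

        public AudioSource[] sources;          // small pool so simultaneous clips can get different pans
        public float minY = -5f, maxY = 5f;    // world range mapped onto the pan range
        public float panRange = 0.8f;          // how far left/right to pan at the extremes

        readonly List<Request> requests = new List<Request>();
        int nextSource;

        // Gameplay code calls this instead of playing the clip directly.
        public void Enqueue(AudioClip clip, float yPosition)
        {
            requests.Add(new Request { Clip = clip, YPosition = yPosition });
        }

        void LateUpdate()
        {
            // Group duplicate requests of the same clip and average their
            // positions so the merged sound sits in between the emitters.
            foreach (var group in requests.GroupBy(r => r.Clip))
            {
                float averageY = group.Average(r => r.YPosition);
                float t = Mathf.InverseLerp(minY, maxY, averageY);

                AudioSource source = sources[nextSource];
                nextSource = (nextSource + 1) % sources.Length;

                source.panStereo = Mathf.Lerp(-panRange, panRange, t);
                source.PlayOneShot(group.Key);
            }
            requests.Clear();
        }
    }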
     
  4. sdochertymusic

    Joined: Aug 13, 2019
    Posts: 8
    Sounds like a nice solution ... but your sound designer should also think about dynamic mixing, compression, and ducking of sounds in order to create a non-linear game sound mix. It's fairly standard to assign priorities to sounds or types of sounds and have the more important ones take precedence, often ducking down the volume of lesser sounds. This can be as simple as having separate Audio Mixers in Unity for each sound type (no scripting needed), or a scripted sound management system is possible too!
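    The no-scripting route is to give each sound type its own mixer group and use a Send from the priority group feeding a Duck Volume effect on the groups that should get out of the way. A scripted version using snapshots could look roughly like this (mixer, snapshot, and timing values are placeholders):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Audio;

    // Sketch of snapshot-based ducking: when a high-priority sound plays,
    // transition to a snapshot that pulls the lesser groups down a few dB,
    // then return to the normal mix. Snapshots and timings are placeholders.
    public class PriorityDucking : MonoBehaviour
    {
        public AudioMixerSnapshot normalMix;   // everything at its usual level
        public AudioMixerSnapshot duckedMix;   // SFX/ambience groups lowered
        public float duckFadeTime = 0.05f;
        public float recoverFadeTime = 0.3f;

        public void PlayImportant(AudioSource source, AudioClip clip)
        {
            duckedMix.TransitionTo(duckFadeTime);
            source.PlayOneShot(clip);
            // Restore the normal mix once the important clip has finished.
            Invoke(nameof(Recover), clip.length);
        }

        void Recover()
        {
            normalMix.TransitionTo(recoverFadeTime);
        }
    }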