
[Released] Dissonance: Unity Voice Chat

Discussion in 'Assets and Asset Store' started by Nyarlathothep, Oct 27, 2016.

  1. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    We haven't released a version for 2019.4, since it's going to be end-of-life very soon, but it should work just fine in 2019.4 :)
     
  2. MonkeyPuzzle

    MonkeyPuzzle

    Joined:
    Jan 17, 2016
    Posts:
    117
Any way to get a breakdown of Dissonance with PUN2 vs. Photon Voice with PUN2?

I have tried everything with Photon Voice: Unity mic, Photon mic, WebRTC audio DSP, AEC, AGC, mic amplifier, but I still get echo and the mic is so quiet. The mic amp helps, but it blows the sound out if the input is loud and introduces noise.

I am using standard headphones with a mic, which work fine with Discord voice chat.

If Dissonance improves on these things it could be a good fit for my project.
     
    Last edited: Sep 13, 2022
    Nyarlathothep likes this.
  3. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    I don't know much about Photon Voice, so I'm not sure how much of a direct comparison I can make.

I can say that Dissonance supports a number of features designed to tackle your exact problem - echo cancellation (reduces feedback from speakers to mic), gain control (automatically amplifies all mic signals to approximately the same level), noise suppression (reduces constant noises like fans or electrical fuzz introduced by extreme amplification) and background sound removal (removes non-constant background sounds such as keyboard clatter or distant voices).

    I often get told that my mic is too quiet in Discord but in testing on Dissonance that's never a problem, so anecdotally it does solve that issue for me!
     
    MonkeyPuzzle likes this.
  4. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,991
Not sure what went wrong, but I'm getting this error when connecting a client (and enabling the Dissonance GameObject at that point), using the latest Mirror from the GitHub repo source, and I have done this setup:
https://placeholder-software.co.uk/dissonance/docs/Basics/Quick-Start-Mirror.html

    upload_2022-12-2_16-40-7.png

Code (CSharp):
NullReferenceException: Object reference not set to an instance of an object
Mirror.TelepathyTransport.<CreateClient>b__18_2 () (at Assets/Mirror/Transports/Telepathy/TelepathyTransport.cs:97)
Telepathy.Client.Tick (System.Int32 processLimit, System.Func`1[TResult] checkEnabled) (at Assets/Mirror/Transports/Telepathy/Telepathy/Client.cs:345)
    standalone server prints:
Code (CSharp):
Unknown message id: 8290 for connection: connection(1). This can happen if no handler was registered for this message. NetworkServer: failed to unpack and invoke message. Disconnecting 1.
     
    Nyarlathothep likes this.
  5. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
With Mirror, Dissonance registers a "message handler" for a certain message type (DissonanceNetworkMessage). Mirror considers it an error to receive a message with no registered handler (which is the error you're getting). To handle this when not in a voip session, Dissonance adds a handler which just discards the message; when you enter a session, Dissonance replaces that with a handler that processes the message properly. If your server is printing that error it means that Dissonance has not been properly initialised on the server.

    Do you have DissonanceComms+MirrorIgnoranceCommsNetwork setup on the server as well?
     
  6. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,991
Aha! **user-error**, I did not activate the Dissonance GameObject on the server side, only on the clients..
It works now - thanks!

btw. at the start, it does some init or begins recording (the part below),
and it causes quite a big freeze (it's only a few seconds, but just wondering if that's normal).
    upload_2022-12-2_17-31-25.png
     
    Nyarlathothep likes this.
  7. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Unfortunately a bit of a stutter when activating recording is normal - by default Dissonance uses the Unity Microphone API which is very slow to initialise (and cannot be accessed from another thread, so we can't offload the work from the main thread).

However it looks like you're suffering from 1-2 second pauses, which is quite a bit longer than I'd expect for this kind of mic initialisation stutter. That could be caused by any number of things (misbehaving hardware, bad drivers, misconfigured audio settings).

    Using our alternative recording package which replaces the Unity Microphone system with the FMOD one might work around the issue.
     
    mgear likes this.
  8. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,991
    One more question,
    was browsing the docs, but didn't see much scripting examples, for example:
    Whats the good way to access dissonance settings from script,
    is there some central singleton or so or should just take reference to needed component(s)?

    For example,
    if i want to change voice trigger mode from script.

    And can i set Push2Talk key, using new input system?
    Or call push on/off from script manually?
     
    Last edited: Dec 22, 2022
    Nyarlathothep likes this.
  9. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
The global settings (e.g. voice encoding quality etc) can be accessed through Dissonance.Config.VoiceSettings.Instance. These are saved to PlayerPrefs so they're persistent as well. You generally don't need to change these on the fly, but you'd use this in a settings menu for example.

    Session settings (e.g. self mute/deafen, tokens, open channels, start/stop talking events) are accessed through the DissonanceComms component. There should always be one of those in a scene with an active voice session.

The activation mode is set on the VoiceBroadcastTrigger because you can have multiple triggers for different things. For example team chat might be voice activated, but global chat is push-to-talk. So to change the mode you'd do something like this:

Code (CSharp):
var trigger = GetComponent<VoiceBroadcastTrigger>();
trigger.Mode = CommActivationMode.VoiceActivation;
We haven't integrated with the new Unity input system yet (that's something we hope to look into in the new year). However there are two ways to integrate with it yourself that are fairly easy.

First of all you can have another script which simply sets Mode = CommActivationMode.Open when you want to transmit and Mode = CommActivationMode.None when you do not want to transmit (we added those two modes specifically for this kind of use-case).
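As a rough sketch, that first approach could be driven from the new Input System like this (the "pushToTalk" action reference and the field wiring are my assumptions, not part of Dissonance):

```csharp
using Dissonance;
using UnityEngine;
using UnityEngine.InputSystem;

// Hypothetical sketch: toggles a VoiceBroadcastTrigger between Open and None
// based on a new-Input-System action. The pushToTalk action reference is an
// assumption - wire up whatever action fits your project.
public class InputSystemPushToTalk : MonoBehaviour
{
    [SerializeField] private InputActionReference pushToTalk;
    private VoiceBroadcastTrigger _trigger;

    private void Awake()
    {
        _trigger = GetComponent<VoiceBroadcastTrigger>();
    }

    private void OnEnable()
    {
        pushToTalk.action.Enable();
    }

    private void Update()
    {
        // Open = always transmitting, None = never transmitting
        _trigger.Mode = pushToTalk.action.IsPressed()
            ? CommActivationMode.Open
            : CommActivationMode.None;
    }
}
```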

Alternatively you can create your own trigger script which inherits from VoiceBroadcastTrigger and overrides IsUserActivated (return true when you want to transmit).
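A sketch of that subclass approach (the exact IsUserActivated signature and the Space-key check here are assumptions for illustration):

```csharp
using Dissonance;
using UnityEngine;

// Hypothetical sketch: a custom trigger that decides activation itself by
// overriding IsUserActivated. Replace the Space-key check with whatever
// input logic your project uses (e.g. a new Input System action).
public class CustomActivationTrigger : VoiceBroadcastTrigger
{
    protected override bool IsUserActivated()
    {
        // Return true while the user should be transmitting
        return Input.GetKey(KeyCode.Space);
    }
}
```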
     
    mgear likes this.
  10. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
Dissonance never sends your own voice to yourself. If you're hearing yourself it must be due to feedback - the remote microphone is picking up your voice playing through the speakers and sending it back to you. Mobile phones are particularly vulnerable to this, since the microphone and the speaker are very close together.

    You can check this by looking at the DissonanceComms inspector - when you start talking you'll probably see someone else marked as "speaking" in the inspector.

    If you change the value at runtime (i.e. in play mode) then the value will be saved in PlayerPrefs. If you change it any other time it should be saved (and that value will be used as the default in app builds).

    If you've ever set it to Earpiece on the test device then the PlayerPrefs on that device will be overwriting it back to Earpiece. You can call VoiceSettings.Reset() on device to clear all preferences and use the defaults.

    You've got all of the systems turned on to maximum here. You've probably done that while trying to fix echo, but that's going to make things more difficult to test since you have several systems all running at once!

    There are five things you can tweak here...

    Voice Detector Sensitivity

Turning the sensitivity down will reduce feedback slightly if the feedback does not sound like a voice (e.g. if the other systems have removed 90% of the feedback, this might remove the last 10%). Setting it to Low Sensitivity is fine, but I wouldn't expect it to do much on its own.

    Noise Suppression
    This is intended to remove noise, such as fans. It will do almost nothing to remove feedback (it's designed not to remove voice, after all that would normally be very bad!). Setting it to Very High is fine, but again I wouldn't expect it to do much to help.

    Background Sound Removal
    This is an ML powered system that has been trained on speech with a lot of background noise and will attempt to remove it. There are two big differences between this and Noise Suppression:
1. Noise Suppression is designed to remove pure noise (e.g. white noise, such as fans) and will not be very effective at other undesirable sounds (such as feedback). Background Sound Removal will attempt to remove whatever the neural network considers to be undesirable background sounds, so BSR is a much more flexible/powerful system.
    2. Noise Suppression is designed to be very "cautious" - it will remove noise but even at max settings in a very noisy environment it will almost never distort the voice, even if it means that some noise is left. Background Sound Removal is the opposite, at max settings it will try to remove the background sound even if it means distorting the voice beyond recognition!
Due to #2 you almost never want to use BSR at maximum settings, it's better to let a bit of noise through and reduce the distortion. BSR can also reduce the effectiveness of echo cancellation - at max settings it could prevent echo cancellation from working at all!

    You should keep the BSR slider at a much more conservative value (e.g. 75%) and increase it later, making sure that the audio quality is acceptable as you increase it. You should disable it completely while setting up Echo cancellation.

    Acoustic Echo Cancellation
    Finally we get to AEC, this is designed to detect and remove feedback so it is intended to fix exactly the problem you have. From your images it looks like you have set it up correctly. If you run your application in the editor and watch the inspector for the cancellation filter, does it initialise properly after 5-10 seconds of talking?

    Audio Ducking
    This is the final thing you can tweak. It is a very simple system which reduces the volume of all voice playback while you are actively transmitting. This isn't a complete solution for feedback, but it can help. On a mobile you can often set this very low (the default setting in Dissonance for audio ducking is almost imperceptible).
     
  11. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    When the AEC is "initialising" it means that it's doing nothing at all, because it's still setting up AEC. This stage should only take 5-10 seconds, but it is quite variable (and can fail altogether, if there is something wrong). If this is always showing "initialising" then it's the root of your echo problem.

    It's a very common error when developing with AEC to have the two devices able to "hear" each other (e.g. ipad mic can record iphone speaker output). Microphones are much more sensitive than you think - you'll need at least two closed doors between the devices if you're testing locally. Of course usually in the real world this isn't a problem!

    I only mention this because if the devices can "hear" each other they will be stuck in "Initialising..." forever.

    Could you send me the complete log please (martin@placeholder-software.co.uk), I'll have a look at the errors.
     
  12. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    From what I can see there aren't any errors in that log from Dissonance.

    These are different levels of mobile echo cancellation strength. You can just think of them as:
    • Disabled
    • Very Low
    • Low
    • Medium
    • High
    • Very High
    Setting AEC too high or too low reduces performance, so they're named like that to give a bit of a hint which level you should use.
     
  13. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,991
    I'm trying to add text chat and for some reason no messages are received..
    I think in the demo scene i did get them working sometime earlier.

    just wondering how to debug where the message gets stuck?

    this part runs in ChatInputController.cs:
Code (CSharp):
Comms.Text.Send(_targetChannel, message);
    this part runs in TextChat.cs:
Code (CSharp):
net.SendText(message, ChannelType.Room, roomName);
    But after that not sure where the message ends up..
    (sending in global or A,B channels no difference)

*also in the demo scene I can see that text data is transferred (the values change);
in my scene it doesn't change.. so it's not going out?
    upload_2023-4-26_11-26-16.png
     
    Last edited: Apr 26, 2023
    Nyarlathothep likes this.
  14. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Do you have a VoiceReceiptTrigger for the channel you're sending to?

    Text chat data is filtered with the same logic as voice chat data - i.e. you need a VoiceReceiptTrigger to receive those messages. It's a little unintuitive that it applies to text!
     
  15. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,991
    Aha, that was it! Thanks.
     
    Nyarlathothep likes this.
  16. DavidM27300

    DavidM27300

    Joined:
    Sep 20, 2022
    Posts:
    6
Hi!
I'm facing an issue. I have an AR app that runs on Hololens, and we use voice chat (it worked perfectly), but recently I wanted to have the tracking feature enabled so we have spatialized chat. So I added the Photon player script to all my avatar prefabs, and then I added a voice broadcast trigger with Channel Type set to "Self". Then I made sure that all my triggers were using "positional data". But once I load the scene I get the following error:
Code (CSharp):
VoiceBroadcastTrigger: Error: Cannot find DissonanceComms component in scene! This is likely caused by "Created a Dissonance trigger component without putting a DissonanceComms component into the scene first"
    Here is my DissonanceSetup gameObject :
    upload_2023-5-4_11-53-30.png

    And here is one of my avatar prefab config :
    upload_2023-5-4_11-54-45.png

    Is there anything I'm missing ?
    Can you help me on this issue ? Thank you !
     
  17. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Based on that warning I would guess that your player is being spawned by some process before the DissonanceComms object has been created. Since there's a VoiceBroadcastTrigger attached to the player you get that warning.

    You probably don't need the VBT attached to your player. Since the trigger on the main object is set to use positional data the one attached to the player is redundant. You only really need the player script attached, to mark this gameObject as a player and tell Dissonance where that player is.
     
    DavidM27300 likes this.
  18. DavidM27300

    DavidM27300

    Joined:
    Sep 20, 2022
    Posts:
    6
    Thank you for that answer !
It might be the problem. Is there any proper way to load the DissonanceComms object first? Or to wait for it before loading the player into the scene?
     
  19. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    If the DissonanceComms prefab is in the scene in edit mode then I would expect it to be there before any players are spawned without any further changes. How are you adding DissonanceComms to the scene at the moment?

    That said, I think the VBT script should actually handle this error gracefully. If it cannot find a DissonanceComms instance it will simply do nothing and should try again next frame.
     
  20. DavidM27300

    DavidM27300

    Joined:
    Sep 20, 2022
    Posts:
    6
I think it is because I load a player in scene 1 to choose the avatar style, then I load scene 2 with the player and the correct avatar applied. The DissonanceComms exists only in scene 2, but the default player probably uses the Dissonance player script already... I believe that is why the DissonanceComms does not exist at this time...
I'm not sure I'm being totally clear...
     
  21. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Yeah if you're loading into a scene without the DissonanceComms component then the other Dissonance components definitely won't be happy! The player tracker will be fine (it just runs a co-routine that keeps trying to find the DC component) but the triggers will log this warning.

    I think your best approach would be to remove the triggers from the player, if possible. If you're using the new GridProximity system there's no need for any triggers attached to the player - you only need them on the player if you're using collider activation.

If you can't remove them from the player then I think your second best option is to have a script which monitors the player tracker script and, once it's tracking, enables the trigger components. Hopefully that should avoid the error in the initial scene.
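One possible sketch of that monitoring script (the component wiring here is an assumption about the setup; the idea is just "wait until a DissonanceComms exists, then enable the triggers"):

```csharp
using Dissonance;
using UnityEngine;

// Hypothetical sketch: keeps the attached trigger components disabled until a
// DissonanceComms instance exists in the scene, avoiding the
// "Cannot find DissonanceComms" warning in scenes without voice.
public class EnableTriggersWhenCommsReady : MonoBehaviour
{
    // e.g. the VoiceBroadcastTrigger components on this player, disabled in the prefab
    [SerializeField] private Behaviour[] triggers;

    private void Update()
    {
        if (FindObjectOfType<DissonanceComms>() == null)
            return;

        foreach (var trigger in triggers)
            trigger.enabled = true;

        enabled = false; // done, stop polling
    }
}
```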
     
    DavidM27300 likes this.
  22. alexr1221

    alexr1221

    Joined:
    Mar 5, 2016
    Posts:
    61
Hi, is there a way to avoid displaying the mic icon in Windows (see below) when we don't take part in the conversation (the local player is muted) but we still want to listen to others?
    I'm using photon integration.

    upload_2023-6-12_17-12-54.png
     
    Nyarlathothep likes this.
  23. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    By default Dissonance will always run the mic in the background because starting/stopping recording is quite an expensive process, so it does it once at startup and then just discards the audio if it's not needed (e.g. if you're muted). If the mic is running then Windows will show that icon.

    To work around this I think your best option is to create a custom microphone capture script (implement IMicrophoneCapture and put it next to DissonanceComms). When you want to record as normal just pass all of the calls through to the BasicMicrophoneCapture script. When you don't want to record simply return `null` when `StartCapture` is called (Dissonance will interpret this as a failure to start the mic and will fall back into a receive-only mode).

You can call DissonanceComms.ResetMicrophoneCapture() when you want to switch between the two modes.
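A very rough skeleton of that idea (I've only shown the StartCapture part described above; the remaining IMicrophoneCapture members are elided and would need to be forwarded in the same way, and the exact member signatures should be checked against the interface in your Dissonance version):

```csharp
using Dissonance;
using Dissonance.Audio.Capture;
using UnityEngine;

// Hypothetical sketch: a mic capture script that can switch into a
// listen-only mode. When ListenOnly is true, StartCapture returns null,
// which Dissonance treats as a failed mic start and falls back to
// receive-only mode (so the OS mic-in-use indicator goes away).
public class ToggleableMicrophoneCapture : MonoBehaviour, IMicrophoneCapture
{
    public bool ListenOnly;

    private BasicMicrophoneCapture _inner;

    private void Awake()
    {
        _inner = gameObject.AddComponent<BasicMicrophoneCapture>();
    }

    public WaveFormat StartCapture(string micName)
    {
        if (ListenOnly)
            return null; // signal "mic failed to start" => receive-only mode

        return _inner.StartCapture(micName);
    }

    // ...forward the remaining IMicrophoneCapture members (StopCapture,
    // Subscribe, Unsubscribe, UpdateSubscribers, IsRecording, etc.)
    // straight through to _inner.
}
```

Call DissonanceComms.ResetMicrophoneCapture() after flipping ListenOnly so the change takes effect.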
     
    olavrv and alexr1221 like this.
  24. alexr1221

    alexr1221

    Joined:
    Mar 5, 2016
    Posts:
    61
    I modified the BasicMicrophoneCapture script accordingly and it worked fine. Thanks for that !
     
    Nyarlathothep likes this.
  25. temroi

    temroi

    Joined:
    Apr 23, 2019
    Posts:
    1
Does anyone know if, by using Dissonance, we can make voice chat work on Hololens 2? We got quite surprised that Voice 2 on Hololens is an Enterprise Circle feature.
     
    Nyarlathothep likes this.
  26. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Dissonance should work on Hololens2 and with PUN/Bolt/Fusion networking (note that Dissonance replaces Photon Voice though).
     
  27. olavrv

    olavrv

    Joined:
    May 26, 2015
    Posts:
    502
    @Nyarlathothep We LOVE dissonance, but have one question / issue we need to resolve. When we use voice activation, there is a loud "pop" when microphones are "activated". Is it possible to fix this? Thanks!
     
    Nyarlathothep likes this.
  28. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
That's odd, no one else has ever reported anything like that! Do you hear the same when you activate with other modes (e.g. push-to-talk)?
     
  29. Zeekerss

    Zeekerss

    Joined:
    Dec 19, 2016
    Posts:
    1
    Hello, I've found this asset basically perfect, but there is just one thing I haven't been able to do. What would be the best way to modify the pitch of a speaker's voice? I know you cannot do it by simply setting the audio source's pitch or through the Audio Mixer.

    I tried multiplying the sampleRate parameter in the "WaveFormat" constructor (in BasicMicrophoneCapture.cs) by 1.5 or so to get a higher pitched voice. This actually works perfectly (though you have to restart mic capture to change the speaker's pitch.) But when a high-pitched speaker makes noise for a longer amount of time, their voice eventually gets cut off and replaced by a subtle white noise for the other clients who are listening. When this speaker stops talking or making any more sound, they will become audible again. I do have ideas for why, but I'm not able to fix it.

    I only really need higher-pitched audio; lower pitched speakers just get delayed over time, which makes sense to me since their audio comes out slower.

    Audio engineering is way above me, so I know this may be a very complex issue, but it's also tantalizingly close to functioning just how I wanted. I wonder if I'm going about this the wrong way or if I'm on my own here. Thanks either way
     
    Nyarlathothep likes this.
  30. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Unfortunately manipulating the sample rate like that won't do what you want - any change to sample rate will cause audio sync issues.

    As you said low pitch will fall behind because it's playing too slowly. High pitch can't get ahead (it can't read audio from the future!) so instead it will just be constantly triggering packet loss compensation and error correction systems.

    The white noise you're hearing is called "Comfort Noise" and it's one of the error correction systems. If just a packet or two is lost then the gap will be filled in with noise (with approximately the right frequency distribution), this is surprisingly hard for the human ear to detect (in small bursts).

If you attach an audio filter to shift pitch (https://docs.unity3d.com/Manual/class-AudioPitchShifterEffect.html) after Dissonance has output its audio, that should do what you want. Set up a custom audio playback prefab with these components (in this order):
    • VoicePlayback
    • AudioSource
    • SamplePlaybackComponent
    • Audio Pitch Shifter
     
    Zeekerss and hopeful like this.
  31. alexr1221

    alexr1221

    Joined:
    Mar 5, 2016
    Posts:
    61
Hi, despite looking into the code and the docs, I don't get how we can properly modify the volume of the entire room. Is it like in the picture below?

    upload_2023-11-3_11-25-42.png
     
    Nyarlathothep likes this.
  32. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Are you trying to modify outgoing volume for yourself in a room (i.e. the volume others will hear you at, if they hear you through that channel)? If so that's what the slider there does, see the docs here.
     
    alexr1221 likes this.
  33. alexr1221

    alexr1221

    Joined:
    Mar 5, 2016
    Posts:
    61
    I wanted to modify the incoming voices. A slider that would decrease or increase the volume of all the players at the same time.
     
  34. DieEtagen

    DieEtagen

    Joined:
    Jan 11, 2013
    Posts:
    7
    Hi,
I'm facing an issue where voice transmitting is broken only with Meta Quest 2. Everything works fine on Quest 1, Quest 3 and Pico 4, but on Quest 2 we see a lot of packet loss, so we cannot really hear the person. We tested on two different Quest 2 devices, both with the same issue. It's weird that the same app with the same settings works on all headsets except the Quest 2. We did not code something like if(isQuest2Device){ // Do something to break audio }.

    We are using Unity Netcode For GameObjects, default VoiceSettings and Unity 2023.1. We tested on the same network with all devices. We also tried Vivox Voice Chat and indeed it works with Vivox also on Quest 2, no sound issues. But we can not use LipSync with Vivox.

    Any idea what could be wrong on the Quest 2? We already did factory reset and checked all system settings and could not find anything what could cause the sound issues.
     
  35. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    There's not a slider to modify incoming voices on a per-channel basis. The only controls for voice volume on the receiving end are:
    • DissonanceComms.RemoteVoiceVolume (changes _all_ incoming voices)
    • VoicePlayerState.Amplitude (change the volume for one specific player)
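So a global incoming-volume slider could be wired up with something like this (the slider setup is my assumption; RemoteVoiceVolume is the property named above):

```csharp
using Dissonance;
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: a UI slider that scales the volume of all incoming
// voices at once via DissonanceComms.RemoteVoiceVolume.
public class IncomingVoiceVolumeSlider : MonoBehaviour
{
    [SerializeField] private Slider slider;         // assumed to use a 0..1 range
    [SerializeField] private DissonanceComms comms; // the scene's comms component

    private void Start()
    {
        slider.onValueChanged.AddListener(v => comms.RemoteVoiceVolume = v);
    }
}
```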
     
    alexr1221 likes this.
  36. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    That is odd, I know a lot of people are using the Q2 with Dissonance without any problems like that. In fact I even have a Quest2 for testing voice!

    When you say you're seeing high packet loss how are you measuring that? Is it using the packet loss monitor built into Dissonance?
     
  37. DieEtagen

    DieEtagen

    Joined:
    Jan 11, 2013
    Posts:
    7
I downgraded the project to Unity 2022.1 and it works now on Quest 2. So it's probably some Unity 2023 bug, although it makes no sense that it's only an issue on Quest 2.
     
    Nyarlathothep likes this.
  38. Nyarlathothep

    Nyarlathothep

    Joined:
    Jun 13, 2013
    Posts:
    389
    Glad to hear you found a fix :)

If you can share any details on what exactly you were seeing, that would help me track it down in 2023 and fix it properly. Thanks.