
AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. dawnr23

    dawnr23

    Joined:
    Dec 5, 2016
    Posts:
    41

    Hello.
    Thank you very much for your kind reply.

    I'm going to buy an asset.

Let me describe it in more detail than before.

The HoloLens 2 device will stream audio to the server in real time using a web socket.

The data is 16-bit PCM (2 bytes per sample), 16 kHz, mono, sent in 2000-byte chunks.

I'd appreciate it if you could let me know whether this is possible.
     
  2. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hey there,
so the asset works either with *compressed* audio:

- you need either the AudioStream components AudioStreamNetMQSource and AudioStreamNetMQClient, or the AudioStreamNetCode asset and its components
in all cases the audio is compressed as mentioned, so it won't work with an arbitrary server - you need precisely these components on both machines to connect them.
(AudioStreamNetCode is based on Netcode for GameObjects - similar to the previous UNET)
- these components incur some CPU cost

or with uncompressed PCM data, which can be pushed only to an Icecast server - via IcecastSource in AudioStream
you can see it in the demo scene IcecastSourceDemo in the LAN section
- I recommend downloading the demo and running the scene
- the scene audio there can be pushed to an Icecast instance as PCM16 (the demo is fixed at 2 channels)
- PCM is pushed at audio rate; the max Unity audio frame should be 1024, so within your 2000 (this depends on the Project Settings->Audio->DSP Buffer Size setting)

I can't tell how well this might (mis)behave on HoloLens without actually running it there
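
For reference, a minimal sketch of converting Unity's float audio buffer to 16-bit PCM chunks of the size you mention - plain Unity scripting, not an asset API; resampling to 16 kHz and the actual WebSocket send are left out as placeholders:
Code (CSharp):
    using UnityEngine;

    // Hypothetical helper: converts Unity's float audio frames to PCM16 bytes
    // and hands them off in fixed-size chunks (e.g. 2000 bytes).
    public class Pcm16Chunker : MonoBehaviour
    {
        public int chunkSizeBytes = 2000;
        byte[] chunk;
        int written;

        void Awake() { chunk = new byte[chunkSizeBytes]; }

        // Unity calls this on the audio thread with interleaved float samples.
        void OnAudioFilterRead(float[] data, int channels)
        {
            for (int i = 0; i < data.Length; i += channels)
            {
                // take channel 0 only (mono), convert -1..1 float to signed 16-bit little endian
                short s = (short)(Mathf.Clamp(data[i], -1f, 1f) * short.MaxValue);
                chunk[written++] = (byte)(s & 0xff);
                chunk[written++] = (byte)((s >> 8) & 0xff);

                if (written == chunkSizeBytes)
                {
                    // SendOverWebSocket(chunk); // placeholder - your transport goes here
                    written = 0;
                }
            }
        }
    }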
     
  3. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    a small update submitted

    ===============================================================================
    3.0.1 Watermelon

    Updates/fixes:
    - renamed events on AudioSourceOutputDevice for future usage: OnRedirectStarted -> OnRoutingStarted, OnRedirectStopped -> OnRoutingStopped
- a few fixes w/ logging + strings/data marshaling for DSP plugins
- main download handler: fixed ContentLength 0 log spam
    - improved AudioTextures source for analysis

    New:
    - added iOS background mode custom AppController (see Documentation060_mobiles.txt for more)

a non-critical update overall - you don't have to update if iOS background audio is not needed
     
  4. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    tiny update submitted

    ===============================================================================
    3.0.2 Watermelon
    Updates/fixes:
    - AudioStreamDevicesChangedNotify: simplified and added (*missing*) processing of devices changed notification for *inputs*
    - error reporting via UnityEvent: fixed invocation to be on main thread only
    - demo application: added zoom/font size for onscreen text
     
  5. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
submitted a substantial cleanup and improvement of the downloading/offline cache functionality:
    ===============================================================================
    3.1 112o22 85k

    Updates/fixes:
    - AudioStream : added option to download to/playback from local disk cache directly to this main component
    : if Url begins with 'http', 'Download To Cache' and 'Play From Cache' options become available
    : see 'Documentation000_audio_streaming' for more
: fixed/updated forward/rewind seeking in infinite streams when downloading (when 'Download To Cache' is On)

    - AudioStreamDownload : removed component and its demos
    : previous functionality of realtime downloads moved to AudioStream and can be seen in AudioStreamDemo - see above
    : for non realtime downloads there's new component AudioStreamRuntimeImport - see below

    - cache location/s : added 'Download Cache Path' - stores original downloaded media which can be played offline, defaults to Application.persistentDataPath
    : added 'Temporary Directory Path' - stores temporary audio PCM data, defaults to Application.temporaryCachePath
    : configurable at 'Support\Resources\AudioStreamRuntimeSettings'

    New:
    - AudioStreamRuntimeImport : imports (streamed) content into an AudioClip - previously done via AudioStreamDownload with 'Real Time Decoding' option Off
    : importing should also be faster than previously
    : also see 'Documentation000_audio_streaming' for more
    - AudioStreamRuntimeImportDemo
    - AudioStreamRuntimeImportStressTest: use the above, similar to now removed AudioStreamDownloadDemo and AudioStreamDownloadStressTest

    Updated out of date documentation where needed.
    ~
     
  6. vitiet

    vitiet

    Joined:
    Jul 8, 2019
    Posts:
    3
    Hello there, very nice asset you've developed here.

However, I've run into a problem with the Resonance demo: it crashed Unity every time I exited play mode.

In one scene I have multiple (5) ResonanceInput components, each on a different game object scattered around the Main Camera and running at the same time, each using a different input from multiple real-time audio streams generated in a Max 8 patcher (just some sine waves with different frequencies) that are streamed through 8 virtual audio cables (by VAC) and routed by VoiceMeeter Potato.

No sound played at all if I enabled more than 3 ResonanceInput components.
One worked fine, but with 2 the sound of the previously enabled ResonanceInput component became very grainy and staticky, almost like a scratched record.

Here is the ResonanceInput component on one of the game objects; I've also made a simple script to select an input device by name and set the id on the ResonanceInput component, which should not be a problem.
    upload_2023-1-2_0-42-50.png upload_2023-1-2_0-43-51.png
    upload_2023-1-2_0-51-40.png

And below is my full sound streaming pipeline: Max 8 -> ASIO4ALL -> 8 VACs -> VoiceMeeter Potato -> Unity
    upload_2023-1-2_0-49-6.png

Could you kindly give me some advice as to what might be preventing multiple Resonance audio inputs from working in a scene and crashing Unity?

    Regards,

    Vi Tiet.
     
  7. vitiet

    vitiet

    Joined:
    Jul 8, 2019
    Posts:
    3
  8. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi, thanks for the crash log !
    I actually wanted to ask for it when I saw your message - there's currently nothing you can do on your part, but I think this should be fixable - will let you know
    & Thanks !
     
    vitiet likes this.
  9. vitiet

    vitiet

    Joined:
    Jul 8, 2019
    Posts:
    3
    Hey! Thanks for the quick reply! I'm looking forward to your solution!
     
  10. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    3.2 012o23 >100k
    Updates/fixes:
    - copyDemoScenesToBuildSettings: asset setting to en/disable populating build settings with its demo scenes when doing a build
    : toggle is present on 'Demo\_Support\Editor\AudioStreamDemoMenuDef' scriptable object
- directory for downloading original streamed audio in Application.persistentDataPath is renamed from 'AS_DL_Cache' to 'AudioStream_DownloadCache'
    - Inputs components (AudioStreamInputBase, Resonance)
    : significantly enhanced startup to prevent glitches especially with 'recordOnStart' ON
    : there were outstanding quality issues - these were fixed
    : fixed recordGain vs. Resonance Gain where necessary (all inputs have direct recordGain)
- more maintenance + improvement refactors for internal usage in other assets
    - used with FMOD 2.02.11
- 3D Input related demo scenes: set Linear Volume Rolloff for 3D AudioSources for better playback audition
    - more user friendly DEMO UX + default values

thanks to @vitiet's testing I fixed a few longstanding issues related to the base Input components
(multiple Resonance inputs remain not completely resolved for now though!)
     
  11. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
Hello, this is a great asset, but I think there's a bug in AudioStreamBase.cs at line 1704: `PtrToStringAnsi` should be `PtrToStringUTF8`, or music files with UTF8 characters in their tags cannot be loaded. :)
     
  12. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi, thank you -
I picked this string conversion because it was the only one that worked reliably on Mono at the time - can you post/PM me a media example with UTF8 tags? I'll have a look
    Thanks !
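
For anyone hitting this on a runtime without Marshal.PtrToStringUTF8, a minimal sketch of a manual UTF-8 conversion (copy the native bytes up to the null terminator, then decode) - an illustration on my part, not the asset's current code:
Code (CSharp):
    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    public static class NativeStrings
    {
        // Reads a null-terminated native string and decodes it as UTF-8;
        // works on runtimes where Marshal.PtrToStringUTF8 isn't available.
        public static string PtrToStringUtf8(IntPtr ptr)
        {
            if (ptr == IntPtr.Zero)
                return null;

            int length = 0;
            while (Marshal.ReadByte(ptr, length) != 0)
                length++;

            var bytes = new byte[length];
            Marshal.Copy(ptr, bytes, 0, length);
            return Encoding.UTF8.GetString(bytes);
        }
    }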
     
  13. plan-systems

    plan-systems

    Joined:
    Mar 8, 2020
    Posts:
    44
Hi friends, is there a sample script demonstrating how/where to add an FFT output filter (for visualization)? I'm not using Unity's audio, so this is through FMOD...

I tried to use the example below, but "bus:/" can't be found:

    https://fmod.com/docs/2.02/unity/examples-spectrum-analysis.html

The goal is to be able to play on iOS and Android from a network stream, be able to play in the background (bypassing Unity audio), and, when the app is active, send FFT data to the music visualizer.
     
  14. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi, this is not an FMOD support forum (if I got your question right) and the asset doesn't use any FMOD Studio capabilities - the example you linked is for FMOD Studio, so you need FMOD for Unity for it
you can also add a DSP via the Core API directly

To alleviate confusion a bit: Unity itself does use a special version of FMOD internally for its audio, but that has nothing to do with the FMOD and FMOD for Unity packages currently available directly from fmod.com - for all intents and purposes, direct access to Unity's internal FMOD is as good as nonexistent
     
  15. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
You are correct - even after I replaced it with that method, it still throws the same error from time to time... for now, I will disable reading the tags.

Worse, I found another issue that seems to only exist in builds: AudioStreamRuntimeImport sometimes does not import the full audio file. It sometimes imports only portions of it (say the first 200 seconds), and it happens seemingly at random. If I reload it, it may load correctly (or still load a partial file). I am listening to the OnAudioClipCreated event and assigning the AudioClip to an AudioSource when the event is fired.
     
  16. plan-systems

    plan-systems

    Joined:
    Mar 8, 2020
    Posts:
    44
    FMOD is a literal requirement to use AudioStream, so I'm not sure why fielding a question is an issue. Yes, this isn't an FMOD forum but this was a question about integrating with your code (not to mention that you are selling a product that is dependent on FMOD).

The answer that would have been helpful was that `fmodsystem.system.getMasterChannelGroup()` needed to be used instead.

Maybe consider being more supportive of those who are new and who have paid for your work on the assumption that the author will make an effort to be helpful (esp. when they rely on someone else's framework to even have a viable product).
     
    Last edited: Apr 4, 2023
  17. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
If it's possible to share the problematic audio, please do so (feel free to PM) - I suspect it most likely doesn't detect the file ending properly -
but I'd like to verify this with the actual file, thanks !
     
  18. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Only part of FMOD for Unity is a requirement for the asset - as described in the docs, at length.
Namely, the FMOD Core API is needed; everything else in FMOD Studio is
- not needed
- something I don't have experience with, since I'm not using FMOD Studio functionality - not in the asset itself, and not much otherwise
- what your example script is based on
This is something I had no idea about, since "bus:/" is a Studio concept
There are examples of adding a DSP to a channel directly via the Core (or low level) API, which is what you would use if not using your example script directly - the samples you're looking for are present e.g. in the FMOD Engine installation from https://fmod.com/download#fmodengine, in ~~ FMOD SoundSystem\FMOD Studio API Windows\api\core\examples ~~ once installed
Hope this answers your questions !
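
For completeness, a rough sketch of what adding FMOD's built-in FFT DSP via the Core API can look like in C#, mirroring FMOD's own spectrum-analysis example - it assumes you already have a valid FMOD.System and FMOD.ChannelGroup from your own setup (e.g. system.getMasterChannelGroup), and is not the asset's API:
Code (CSharp):
    using System;
    using System.Runtime.InteropServices;
    using UnityEngine;

    public class CoreApiFftSketch : MonoBehaviour
    {
        FMOD.System system;             // assumed to be assigned from your FMOD setup
        FMOD.ChannelGroup channelGroup; // assumed to be assigned from your FMOD setup
        FMOD.DSP fftDsp;

        const int WindowSize = 1024;

        // call once the system and channel group above are valid
        public void SetupFft()
        {
            system.createDSPByType(FMOD.DSP_TYPE.FFT, out fftDsp);
            fftDsp.setParameterInt((int)FMOD.DSP_FFT.WINDOWSIZE, WindowSize);
            channelGroup.addDSP((int)FMOD.CHANNELCONTROL_DSP_INDEX.HEAD, fftDsp);
        }

        void Update()
        {
            if (!fftDsp.hasHandle())
                return;

            fftDsp.getParameterData((int)FMOD.DSP_FFT.SPECTRUMDATA, out IntPtr data, out uint length);
            var fft = Marshal.PtrToStructure<FMOD.DSP_PARAMETER_FFT>(data);

            if (fft.numchannels > 0 && fft.length > 0)
            {
                // fft.spectrum[channel][bin] - feed these magnitudes to your visualizer
                Debug.Log($"bins: {fft.length}, first bin magnitude: {fft.spectrum[0][0]}");
            }
        }
    }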
     
  19. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
btw also note that these FMOD-provided samples are not C# - they use the C++ API directly, but the mapping between the two is generally 1:1
     
  20. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
... and - to hopefully close this off - the asset itself contains a fully accessible implementation of using/adding a DSP
- of which the FMOD built-in FFT node is an example - so it is certainly possible to base your implementation on this
     
  21. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
One more thing: have you seen this happening in the demo scene ('AudioStreamRuntimeImportDemo') with your media as input ?
I think this should work correctly for local (i.e. not streamed) media
- you might want to increase Starving Retry Count on the component if it downloads from the network; also, if you give it a Unique Cache id (per sound/media file), it will load the clip from the previous run (so if that run was correct it will at least not get worse)
A log with ~ INFO log level would also be helpful
     
  22. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
    Sure, here's one problematic music file:
    https://drive.google.com/file/d/1HM_gLMob2TXts2gAAcvvwp_HvjwQzA0G/view?usp=sharing

However, I don't think it's an issue with this specific music file, as I have seen it happen with other files as well. When this issue occurs, I try to reload the audio file with:

Code (CSharp):
    audioStream.url = url;
    audioStream.OnAudioClipCreated.RemoveAllListeners();
    audioStream.OnAudioClipCreated.AddListener((_, clip) => {
        myAudioSource.clip = clip;
    });
    audioStream.Play();

This loads the full music file maybe half of the time. If I keep retrying, it will probably load the full file within 10 retries, but the thing is I don't know the music file's original length (at least not trivially) in code.

Also, I am not using online media. I am reading from the local file system (more specifically, user-provided music files in the StreamingAssets folder), and I acquire the AudioClip in the OnAudioClipCreated event.

    Unfortunately I don't have logs at the moment, will provide later when I repro it. Thanks for looking into it!
     
  23. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
    Also, it seems like
    audioStream.continuosStreaming = true;

will make the AudioStreamRuntimeImport script reload a local URL forever, i.e. after the file is loaded, it will attempt to load it again in an endless loop.
     
    Last edited: Apr 5, 2023
  24. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
  25. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hi @TigerHix, thanks for the file !
I just ran a test which imports it in a loop and prints its length after each import, but I couldn't see it failing to import the whole file - something like:
    Code (CSharp):
    IEnumerator Start()
    {
        this.asri = GetComponent<AudioStreamRuntimeImport>();
        for (; ; )
        {
            this.asri.url = url;
            this.asri.OnAudioClipCreated.RemoveAllListeners();
            this.asri.OnAudioClipCreated.AddListener((_, clip) =>
            {
                Debug.Log(clip.length);
            });

            if (this.asri.ready)
                if (!this.asri.isPlaying)
                    this.asri.Play();

            yield return null;
        }
    }
(notice the check for the 'ready' flag - that's something I forgot initially and a warning should be added for it; I also enabled 'Overwrite Cached Download' so it always runs the whole process and doesn't just re-read previous data)

if you stop the scene while it's in the middle of decoding, you can see it produces a truncated clip - my guess is something is stopping your component (maybe elsewhere in the code) and this causes the clip to be shorter - it always produces a clip on stop/shutdown
So I recommend checking for 'isPlaying'/'ready' before doing anything on the component, including e.g. stopping it
let me know if this helps
     
    TigerHix likes this.
  26. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
yea this is expected - it would probably make sense to ignore the flag when importing a local file though
for a network streamed location I can't ignore it since the user might want to reconnect at any time
     
    TigerHix likes this.
  27. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    ah looks like something's off with clip format
    it sounds identical to source though hah
    thanks for the report !
     
    TigerHix likes this.
  28. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
    This is weird, I am pretty sure no one is destroying or interacting with the component when the audio is being loaded, but I will add the ready flag check just to be safe.

Also, did you run your test in a standalone Windows build? I did not experience the issue in the Editor.
     
  29. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    please run this build - it's basically the above script which just displays the length of the clip after each import:
    https://www.dropbox.com/s/oepmwn9rggzk8zy/ASRITest.zip?dl=1
    complete scene w/script as package: https://www.dropbox.com/s/13tnhflnsk9cawv/ASRITest.unitypackage?dl=1

    Thanks !
     
    TigerHix likes this.
  30. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
    Thank you! I will try to do some more testing on my end.

    I have (unfortunately) encountered another issue:


    Exception: channel.addDSP ERR_INVALID_PARAM - An invalid parameter was passed to this function.
    AudioStream.AudioStreamBase.ERRCHECK (FMOD.RESULT result, System.String customMessage, System.Boolean throwOnError) (at Assets/Packages/AudioStream/Scripts/AudioStream/AudioStreamBase.cs:2177)
    AudioStream.AudioStreamRuntimeImport.StreamStarting () (at Assets/Packages/AudioStream/Scripts/AudioStream/AudioStreamRuntimeImport.cs:105)
    AudioStream.AudioStreamBase+<StreamCR>d__107.MoveNext () (at Assets/Packages/AudioStream/Scripts/AudioStream/AudioStreamBase.cs:1419)
    UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <4014a86cbefb4944b2b6c9211c8fd2fc>:0)


This seems to happen if I try to load an audio file on startup. I was not able to repro it if I load the same audio file again sometime later. What could be the possible reasons for this?

    My DSP buffer size is the default one (in Project Settings -> Audio).
     
  31. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
you have to wait for the 'ready' flag, as mentioned
if you do custom scripting, try to base it on the demo scenes - you'll run into fewer issues
     
    TigerHix likes this.
  32. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
Got it - so even if the OnAudioClipCreated event is fired, I have to wait for the ready flag to become true. So should I just not use the OnAudioClipCreated event at all and instead wait until the ready flag is true, OR until the OnError event is fired?
     
  33. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
you're safe to use any and all of the component's events/actions, but only _after_ the ready flag is set
OnAudioClipCreated will never be fired before ready is set - unless you call 'Stop' manually before it is, or something else like scene unloading / OnDestroy occurs before the component is ready
- I suspect something like this might be cutting off your clip
- but even in that case Play would have to have been started already for the clip to contain at least some data, so I'm not entirely sure how you are calling this
(that's also why logs would be helpful)

an error can happen before ready is set, in which case you should basically restart the whole process gracefully (this might not always be possible, though that shouldn't be a concern here since in that case really basic stuff isn't working)

    I added this (missing)
    Code (CSharp):
    public void Play()
    {
        if (!this.ready)
        {
            LOG(LogLevel.ERROR, "Please check for 'ready' flag before playing");
            return;
        }
to the start of Play() at line ~800 in AudioStreamBase.cs
so you can at least verify you're not calling it prematurely
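
Putting it together, a minimal sketch of the pattern being described here (wait for 'ready', then wire the event and call Play) - flag semantics as used in the loop example earlier in the thread:
Code (CSharp):
    using System.Collections;
    using AudioStream;
    using UnityEngine;

    public class WaitForReadySketch : MonoBehaviour
    {
        public AudioStreamRuntimeImport asri;   // the import component
        public AudioSource myAudioSource;       // where the created clip should end up
        public string url;

        IEnumerator Start()
        {
            this.asri.url = this.url;

            // don't touch the component before it has finished initializing
            yield return new WaitUntil(() => this.asri.ready);

            this.asri.OnAudioClipCreated.RemoveAllListeners();
            this.asri.OnAudioClipCreated.AddListener((_, clip) => this.myAudioSource.clip = clip);
            this.asri.Play();
        }
    }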
     
    TigerHix likes this.
  34. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    just to clarify: this is something you should add in your project - it's not in any public script right now..
     
    TigerHix likes this.
  35. TigerHix

    TigerHix

    Joined:
    Oct 20, 2015
    Posts:
    69
    Okay, this makes more sense now. Thanks so much for the quick turnaround! I will update my code based on this. :)
     
    r618 likes this.
  36. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
this is due to FMOD incorrectly reporting the length of the file ( so sending the file was very much worth it @TigerHix )
even their own native example with the simplest possible playback of a sound/file reports it as 2x as long, with the 2nd half being just silence when it's played back....
I will provide a workaround for this in the RuntimeImport component, but e.g. direct playback of the file is probably pointless to try to fix tbh
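
For context, a minimal sketch of the kind of workaround described in the next update below (dropping redundant trailing frames decoded as zeros before creating the AudioClip) - an illustration, not the asset's actual code, and naive in that it also trims genuine trailing silence:
Code (CSharp):
    using System;
    using UnityEngine;

    public static class ClipTrimming
    {
        // Drops trailing all-zero samples (from the mis-reported length) and builds the clip
        // from what remains; 'samples' are the interleaved floats as decoded.
        public static AudioClip CreateTrimmedClip(string name, float[] samples, int channels, int sampleRate)
        {
            int end = samples.Length;
            while (end > 0 && samples[end - 1] == 0f)
                end--;

            end -= end % channels; // keep whole frames only
            if (end == 0)
                return null; // nothing but silence

            var trimmed = new float[end];
            Array.Copy(samples, trimmed, end);

            var clip = AudioClip.Create(name, end / channels, channels, sampleRate, false);
            clip.SetData(trimmed, 0);
            return clip;
        }
    }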
     
  37. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    3.2.1 update submitted ~

    ===============================================================================
    3.2.1 042023 180k
    Updates/fixes:
    - AudioStreamBase : 'continuosStreaming' is automatically ignored also for AudioStreamRuntimeImport
- AudioStreamRuntimeImport : for some (MPEG) files FMOD incorrectly reports the length and the redundant parts are decoded as a stream of 0s;
these frames are checked before writing to an AudioClip and aren't included in the imported clip
: added new 'OnSamplesCreated' Unity event which is invoked immediately before clip creation with the samples to be set for the AudioClip
this also helps in projects with Unity audio disabled, where an AudioClip is not created but the samples can still be retrieved

    - projects with Unity audio disabled: fixed several instances relying on Unity audio properties which can be derived from FMOD directly, so projects with Unity audio disabled can run properly now
    see this also mentioned in Documentation

    Demo scenes/assets:
    - copyDemoScenesToBuildSettings : added more explanation to first README/Docs so it's more noticeable
: the checkbox now also controls copying all demo assets into 'StreamingAssets\AudioStream'
    toggle is present on 'Demo\_Support\Editor\AudioStreamDemoMenuDef' scriptable object
- demo scenes with OnPlaybackStopped : it was not correctly set in a handful of scenes due to its recent change - this was updated
    - RuntimeImportDemo / AudioStreamMemoryDemo : new 'OnSamplesCreated' example usage
    - AudioClipChannelsSeparationDemo : added displaying of progress of processing the AudioClip

    !! IMPORTANT !!
- since some functionality changed places (assemblies), please DELETE the previous version of the asset (the whole 'AudioStream' folder) before importing the new update
- since the asset contains a native mixer plugin, this might not be possible with the project opened in the Editor -
with the project closed, delete the 'AudioStream' folder normally via the OS filesystem, reopen the project in the Editor and import the update afterwards
     
  38. Careprod_DevTeam

    Careprod_DevTeam

    Joined:
    Dec 29, 2020
    Posts:
    7
Hello, I have a question about your asset. I just tested AudioStreamDemo -> AudioStreamInputDemo - is it possible with your solution to record from multiple input devices simultaneously and mix them all into one audio stream (my goal is to get the data from this merge, put it in an AudioStreamTrack and use it for WebRTC purposes)?

    Thank you.
     
  39. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
I would have to script this
but if you put an AudioSourceCaptureBuffer on both of your inputs and gather them on a 3rd audio object, you can aggregate them there in the OnAudioFilterRead callback (a rough sketch of just the mixing part is below)
see also e.g. AudioStreamInputChannelsSeparationDemo, which goes kind of the opposite way (from a playing clip/input to individual objects), or any of the ChannelsSeparation demos

note that if you don't use the asset's Input component (but just Unity Microphone), this is just Unity scripting - you don't even need FMOD for it

also note that I don't have any experience with WebRTC so I can't say whether the result of this is directly usable with it - but it's just a Unity PCM audio buffer so I would guess so
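
To illustrate just the aggregation step, a minimal sketch that mixes two captured buffers into this object's audio in OnAudioFilterRead - how the float[] chunks are obtained from the capture components is left as a placeholder, so treat the 'bufferA'/'bufferB' fields as assumptions:
Code (CSharp):
    using UnityEngine;

    public class TwoBufferMixSketch : MonoBehaviour
    {
        // assumed: filled from your two capture sources, same channel count/sample rate as the output
        public float[] bufferA;
        public float[] bufferB;

        void OnAudioFilterRead(float[] data, int channels)
        {
            for (int i = 0; i < data.Length; i++)
            {
                float a = (bufferA != null && i < bufferA.Length) ? bufferA[i] : 0f;
                float b = (bufferB != null && i < bufferB.Length) ? bufferB[i] : 0f;

                // simple sum with clamping; replace with proper mixing/limiting as needed
                data[i] = Mathf.Clamp(data[i] + a + b, -1f, 1f);
            }
        }
    }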
     
  40. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
  41. Careprod_DevTeam

    Careprod_DevTeam

    Joined:
    Dec 29, 2020
    Posts:
    7
Hello, sorry for the delay, I was working on other things. I just tested your script and it works like a charm; I also took the "data" from OnAudioFilterRead in InputsAggregation.cs and sent it via WebRTC. It looks like the multi-input + WebRTC part works - your scripts were really helpful, thanks again! However, now I'm stuck because I can't figure out how to take the data received from WebRTC and send it to multiple outputs.

If you have some advice for multi-output playback of float[] data, I'll be happy to hear it!
     
  42. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    mentioned here:
I'm not sure what you want to do with the split afterwards, but this way you'll at least get individual channels, if that's what you're after
     
  43. Careprod_DevTeam

    Careprod_DevTeam

    Joined:
    Dec 29, 2020
    Posts:
    7
Hello, sorry for the new delay - my last message wasn't very clear, so here is my problem in more detail:

I can record my voice in real time, send the audio over my WebRTC connection, receive the audio from the other end of the connection and listen to it on my current audio output device (my headset). Now I still want to listen to that audio data (my voice), but I want to play it on my headset AND, for example, an audio speaker.

Same audio data, but different audio output devices.

When I use the Unity WebRTC package, I simply provide an "audioSource" and it plays the received audio on that audio source. I tried to get the raw data from this AudioSource using OnAudioFilterRead(float[] data, int channels) but, even though I get the data, I don't know how to play it on another output device (e.g. my audio speaker).

I hope this is easier to understand, but the subject is quite complex for me.

    Thank you.
     
  44. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
For desktops, first look at and examine the AudioSourceOutputDeviceDemo and OutputDeviceUnityMixerDemo scenes in the 'Output devices' section
You can either
- place an AudioSourceOutputDevice component on the GO with the audio, or
- use a mixer with the mixer effect for the GO with the audio

/ both approaches will be similar performance-wise since you need to route to only one other output (besides the default one, which is played automatically)

In either case: the single relevant thing here is a GameObject playing your audio - it doesn't matter where it came from -
- as a starting point I recommend creating a new empty scene with e.g. a single AudioSource playing some audio/clip on loop and setting up the rest of the asset's components there first

lmk if you make this work !
     
  45. Careprod_DevTeam

    Careprod_DevTeam

    Joined:
    Dec 29, 2020
    Posts:
    7
Thanks for the answer - I tried your AudioSourceOutputDevice script on a GameObject with an AudioSource + AudioClip and it works, but it won't work in my case.
On my AudioSource that plays the audio received from WebRTC, I don't have any audio clip:

    upload_2023-6-2_15-28-1.png

With this GameObject without the AudioSourceOutputDevice, I hear my voice from the WebRTC connection, but only on my main audio output (my headset).

With this GameObject AND the AudioSourceOutputDevice, I hear my voice from the WebRTC connection (even if the audio source's volume is 0), but when I try to change "Output Driver ID", it does not switch the audio output.

That's why I was thinking about getting the raw data played by this AudioSource and injecting it into AudioClips on other AudioSources, but I don't know how to do it. I will now look into OutputDeviceUnityMixerDemo to see if I can use it.

    Thanks for the new approaches!
     
  46. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
you don't need an AudioClip - for the test scene I mentioned it just as a test source
one thing I forgot to mention: make sure the script is the last one - at the bottom - in the Inspector - which seems to be the case
and the 'Web RTC Peer' script produces the audio in its OnAudioFilterRead callback - i.e. it writes the (received) audio to the data[] buffer there and doesn't change it ( that's how the audio is/can be shared between components on the same GO )
- the AudioSource has to be started via .Play() - my script does it in Awake, but the RTC script might be overriding this maybe
- the AudioSource set on the RTC peer should be on the same game object (again seems to be the case)
/ everything above applies to the mixer output / plugin too

I have no idea how the WebRTC package is built and supposed to work, so my guessing is limited here..
     
    Last edited: Jun 2, 2023
  47. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
as I reread this now - I probably misled you with "the 'Web RTC Peer' script produces the audio in its OnAudioFilterRead callback", tbh
- the whole point is that it doesn't matter _how_ the sound is produced; AudioSourceOutputDevice depends only on OnAudioFilterRead - nothing else [it doesn't use any AudioClip injection]
- if the WebRTC package produces its own AudioClip and plays it, this callback should still be invoked by Unity - I *think* there are cases where this might not be true, but it's hard to pinpoint without actually knowing what the RTC component does
in either case, *if* you can confirm that OnAudioFilterRead is invoked by Unity audio on that component, AudioSourceOutputDevice will work with it automatically (a quick way to check is sketched below)
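
A quick way to verify that precondition - a throwaway diagnostic component placed on the same game object that just counts audio callbacks (name and details are my own, nothing asset-specific):
Code (CSharp):
    using UnityEngine;

    public class AudioFilterReadProbe : MonoBehaviour
    {
        int callbackCount;
        float peak;

        void OnAudioFilterRead(float[] data, int channels)
        {
            callbackCount++;
            for (int i = 0; i < data.Length; i++)
                if (Mathf.Abs(data[i]) > peak)
                    peak = Mathf.Abs(data[i]);
        }

        void Update()
        {
            // spammy, but fine for a quick check: if the count stays at 0,
            // audio on this object is not going through Unity's filter chain
            Debug.Log($"OnAudioFilterRead calls: {callbackCount}, peak sample so far: {peak}");
        }
    }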
     
  48. wieger_

    wieger_

    Joined:
    Dec 27, 2020
    Posts:
    4
    Hello everybody,
I'm trying to get external audio inputs into my Unity project, and according to the documentation: "To stream audio from any available system recording input - just attach an AudioSourceInput component to an empty game object.."

    But there is no AudioSourceInput component.. Am I doing something wrong?

    I'm using AudioStream 3.2.1

    all the best
     
  49. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
@wieger_ can you see the component in the 'Add Component' popup on a game object ? Screenshot 2023-06-16 at 01.15.16.png

do the demo scenes work when you press Play in the Editor ?

also, if you do it from scratch, make sure to attach 'AudioSourceMute' beneath it afterwards - otherwise you'll get feedback immediately with the default settings when running the scene
     
  50. wieger_

    wieger_

    Joined:
    Dec 27, 2020
    Posts:
    4
Ah yes, I was looking for a different name. Audio Stream In is there (the documentation mentioned AudioSourceIn). Demo scenes work. Thanks!! I'm diving in :)