
AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi, it should work if the Unity + FMOD integration works correctly on Linux - the components for game objects are just C# scripts
(the optional native mixer plugin is not available on Linux)
there will be some latency; I recommend running the demo app, e.g. on Windows, to get a feel for how it behaves
-- I did a Linux test initially, but it was a few years ago now, so I don't have up-to-date info
     
  2. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi, the short version is that I don't recommend doing this with Unity - you might get 80-90% there, but the end result will still not feel right on the platform
-- there is some buffering support and downloading via UnityWebRequest, and it can - mostly - extract album artwork from files, but the rest, like variable playback speed and captioning, is not there;

i'm not sure how captioning works exactly - it should be doable if it's timestamped extra (meta)data present in the file/stream - if it's just tags, their changes should be emitted w/ a UnityEvent
- variable playback speed can be approximated (not with stellar results) just by changing the pitch; something like throwing away samples is more invasive and would probably require non-trivial changes
there are more sophisticated ways of doing it - e.g. https://www.surina.net/soundtouch/ - but this is not included
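The pitch approach is literally one property on the AudioSource; a minimal sketch (the component and field names here are mine, not part of the asset), with the caveat that speeding up also raises the pitch, which is exactly the "not stellar" trade-off:

```csharp
using UnityEngine;

// Sketch: variable playback speed via AudioSource.pitch.
// The clip is resampled, so 1.25 plays ~25% faster AND higher-pitched.
[RequireComponent(typeof(AudioSource))]
public class PlaybackSpeed : MonoBehaviour
{
    [Range(0.5f, 2f)] public float speed = 1f;

    void Update()
    {
        GetComponent<AudioSource>().pitch = speed;
    }
}
```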
     
  3. frankcarey


    Joined:
    Jul 15, 2014
    Posts:
    14
Hi, I'm having a weird issue with AudioSourceOutputDevice. I have a custom script that provides a menu to select the audio output from a list. It works fine at first, but after about 1-2 minutes of triggering various short audio files to play, there is always a point where the secondary audio output just stops working. The component values do not change as far as I can tell, and I've added callbacks to see if OnRedirectStopped() or OnError() are firing; neither seems to be. The kicker is that if I go back into the selector menu, click away from the output I want, and then back to it, it works again for a little while. Any suggestions where I can put a breakpoint to help debug this?

    Edit: Also I've set the LogLevel to DEBUG and don't see any messages come through when it fails.

    Edit2: Note that I have the AudioSourceOutputDevice set on the main Camera.
     
  4. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hi!
can you verify what the DSP buffer size is set to in Audio settings? Try easing it to something else if it's on Best latency
can you PM me the project/link - or the relevant part - otherwise, if possible? I'll have a look (it's nearly impossible to tell where/what to debug; FMOD has silently failed on me too at times)
    Thanks!
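For anyone checking the first point: the current DSP buffer size can be read at runtime with Unity's stock API; a small sketch:

```csharp
using UnityEngine;

// Sketch: log the active DSP buffer configuration.
// 'Best latency' in Project Settings > Audio corresponds to a small buffer
// (e.g. 256 samples); 'Good latency' ~512, 'Best performance' ~1024.
public class DspBufferInfo : MonoBehaviour
{
    void Start()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();
        Debug.Log($"DSP buffer size: {config.dspBufferSize} samples @ {config.sampleRate} Hz");
    }
}
```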
     
  5. frankcarey


    Joined:
    Jul 15, 2014
    Posts:
    14
Yes, that seems to be exactly what's happening. I had turned the latency setting down to 1, but after setting it to around 40 the issue seems to have stopped so far. Were you able to find a way to be notified by FMOD when it eventually fails, or do you have a ticket with FMOD I could track?
     
  6. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
not really, I just made sure the code behaves as conservatively as possible; these failures started occurring mainly when changing DSP buffers to something other than what FMOD considers the default for a given output/device
you can turn on some FMOD built-in diagnostics by uncommenting FMODSystemsManager.InitializeFMODDiagnostics in code (you might need the debug libraries - the ones ending with 'L' - for all traces to work, I think; not sure if they're loaded in the editor by default, haven't checked this with recent FMOD versions)

you mean the latency slider on the component - I couldn't quite place it at first!..
yes, 1 is not a good idea tbh - I should probably restrict this somehow, since it's virtually impossible to determine the lower bound automatically, I think

in any case my best advice is to stick w/ the value which works now; I'd be careful with anything much below 50 ms though, which should be OK for most consumer stereo setups
     
  7. AaronMahler


    Joined:
    May 19, 2021
    Posts:
    8
Do you still need a lot of audio sources? I think you need a podcast this time; I recently subscribed to https://www.podcasthowto.com/best-dollop-episodes/ - you can try it, I hope you will enjoy it.
     
  8. hadrien23


    Joined:
    Oct 9, 2019
    Posts:
    12
    Hi, I'm using AudioStream to stream the microphone and scene audio to a web server.
The mic is captured with AudioStreamInput2D and muted locally with an AudioSourceMute component so it can't be heard locally. The mic signal is then merged into the scene audio, and the resulting signal is streamed to the server.

    The code to merge the mic looks something like this.
    Code (CSharp):
    private void OnAudioFilterRead(float[] data, int channels)
    {
        [...]
        var signals = data.ToArray();
        var length = signals.Length < mic.captureBuffer.Length
                         ? signals.Length
                         : mic.captureBuffer.Length;
        for (int i = 0; i < length; i++)
        {
            signals[i] += mic.captureBuffer[i];
        }
        [...]
    }
    This works well on Windows, but on Oculus Quest the mic sound is very choppy.

I did a test unmuting the mic, so it can be heard in the Unity player, and disabled the signal merging to isolate the problem: the mic is still choppy on Quest, unless I set useUnityToResampleAndMapChannels to true.

So the mic is much better on Quest with that setting set to true, but now the problem is that when I switch back to my original setup for streaming the audio (mute the mic locally and merge the signal), the microphone pitch is much higher.

    I tried disabling useAutomaticDSPBufferSize too but the sound is choppy as well.

Not sure what to do from here... Is there a way to correct the pitch when setting useUnityToResampleAndMapChannels to true? Maybe it's something to do with the way the mic signal is merged into the scene audio. Also, do you see a different approach to achieving my goal, other than muting the mic locally and merging the signal? For instance, I tried using Unity Audio Mixer groups, but I don't see an API to retrieve the audio of a specific group.

    Any help is appreciated, thanks!
     
  9. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
hey @hadrien_n , your approach is OK! - only I would avoid the per-call allocation in the OnAudioFilterRead callback (data is already an array - no need to recreate it)
look at the 'Resample Input' setting on the component - that changes the pitch, and it's probably not behaving correctly here / I'm not sure how many input/output channels there are on the Oculus device /
- it was added for some more advanced interop with different packages, but when off, the mic signal should be processed unchanged, only by the audio source
if that fails, please set the log level to INFO on the component and send me the log from a run on the device
Thanks !

and sorry about the choppy sound when using the Speex resampler (with useUnityToResampleAndMapChannels off) - it doesn't really handle more than 2 channels properly
     
  10. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
you'd need a separate audio mixer plugin to do anything that touches the actual signal there
     
  11. hadrien23


    Joined:
    Oct 9, 2019
    Posts:
    12
Hey, thanks for the quick reply. I realized after your response that I didn't have the latest version - that's why I didn't see the Resample Input setting on my side. So after updating and fixing the component setup (I needed to add an AudioSourceCaptureBuffer in addition to the AudioStreamInput2D component, in order to access the captureBuffer property)... it's now much better! No more choppy sound.

    I'm building with 'Automatic DSP Buffer size', 'Resample Input' and 'Use Unity To Resample And Map Channels' all set to true.

I find the mic still a bit saturated, so I'm going to look into that next - but wow, that's a relief! Maybe adjusting the gain can help reduce the saturation a bit; I'll see.

Regarding copying the float[] data array in the code above (var signals = data.ToArray()): I'm doing this because I don't want to add the mic signal into the data array directly, otherwise we would end up hearing the mic sound in the player. But there is probably a better way to do this, maybe by reusing the same array instance instead of creating a new one each time.
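The reuse idea can be sketched like this, under the assumptions of the earlier snippet (the `mic` reference and its `captureBuffer` property follow that code and the asset's AudioSourceCaptureBuffer component; treat names as assumptions):

```csharp
using UnityEngine;

// Sketch: reuse one scratch buffer instead of calling data.ToArray() on every
// OnAudioFilterRead call - per-call allocations on the audio thread add GC pressure.
public class MicMerger : MonoBehaviour
{
    public AudioSourceCaptureBuffer mic; // hypothetical reference, per the setup above
    float[] _scratch;

    void OnAudioFilterRead(float[] data, int channels)
    {
        if (_scratch == null || _scratch.Length < data.Length)
            _scratch = new float[data.Length];          // (re)allocate only when the size changes

        System.Array.Copy(data, _scratch, data.Length); // copy; 'data' (the audible mix) stays untouched

        int length = System.Math.Min(data.Length, mic.captureBuffer.Length);
        for (int i = 0; i < length; i++)
            _scratch[i] += mic.captureBuffer[i];        // merge the mic into the copy only

        // ... hand _scratch to the streaming encoder here ...
    }
}
```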

    Anyway, thanks for the help. Cheers!
     
  12. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
that checks out!
(I'm not sure right now, but it's possible something was fixed in the meantime too)

you can attach normal Unity MonoBehaviour audio effect components to the game object, btw - just make sure to take the result only at the last step - i.e. with AudioSourceCaptureBuffer at the bottom in the Inspector - and it should work

makes sense then!

good to know it's working
Best!
     
  13. ROBYER1


    Joined:
    Oct 9, 2015
    Posts:
    1,454
Is it possible for me to use this plugin to mix game and microphone audio together and redirect that to a plugin like Natcorder, to save that audio with a screen recording?
     
  14. ROBYER1


    Joined:
    Oct 9, 2015
    Posts:
    1,454
    Trying to do this also to get game and mic sound combined in a recording within a video call app - did you find a solution to this? @rickomax
     
  15. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
you can use all Unity audio callbacks normally, so in this case you can mix mic input with an AudioSource/Listener in OnAudioFilterRead, for example - see above how hadrien_n did it
you need to script this though; there's nothing automatic about it

as for Natcorder, they would have to support passing along custom PCM data - or, in other words, properly plug into the Unity audio system - which I unfortunately have no knowledge of or experience with
     
    ROBYER1 likes this.
  16. ROBYER1


    Joined:
    Oct 9, 2015
    Posts:
    1,454
Hi @hadrien_n , I am trying to do something similar - did you have a full script implementation of this that you would be able to share, please?
     
  17. Karsten


    Joined:
    Apr 8, 2012
    Posts:
    187
    @r618
    Hi
would I be able to stream an 8-channel AudioClip (the max in Unity, afaik) out of the box?
I can imagine how to do this technically (ofc bandwidth must be available), but I'm asking whether this is already possible, so I know beforehand if I'd have to develop it on my own after buying the asset
     
  18. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
Hi, you'd have to, unfortunately!
It uses the OPUS codec from https://github.com/lostromb/concentus to un/pack audio for a smaller footprint over the network, but that is limited to max. 2 channels (and selected samplerates); though it can push uncompressed PCM data to Icecast, too.
PCM can be scripted (you'd just skip the en/decoding steps), but that would be quite heavy on bandwidth for sure (not sure if usable at all..)
I'll have a look at whether it's reasonably doable via FMOD codec support and possibly let you know (no promises though, I'm not even 100% sure it's viable right now)
     
  19. florentRaffray


    Joined:
    Feb 10, 2021
    Posts:
    7
Hi, I'm wondering if this asset could help with a specific outcome I would like. I've been exploring audio-reactive pieces in Unity and it's been a lot of fun: https://www.instagram.com/lost_atoms/
I would like to extend this concept to hopefully be able to visualize streaming music from Apple Music and/or SoundCloud in an iOS app. Could this tool get streaming data from those services and pass it through the Unity function GetSpectrumData() or some other similar FFT-based audio analysis?
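For reference, GetSpectrumData operates on whatever the AudioSource is currently playing, regardless of where the audio came from; a minimal sketch:

```csharp
using UnityEngine;

// Sketch: pull FFT magnitudes each frame from an AudioSource and drive
// visuals from them. Works the same for a disk clip or a streamed source
// feeding the AudioSource.
[RequireComponent(typeof(AudioSource))]
public class SpectrumProbe : MonoBehaviour
{
    const int Bins = 512; // must be a power of two between 64 and 8192
    readonly float[] _spectrum = new float[Bins];

    void Update()
    {
        GetComponent<AudioSource>().GetSpectrumData(_spectrum, 0, FFTWindow.BlackmanHarris);
        // _spectrum[i] now holds the magnitude of the i-th frequency bin
    }
}
```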
     
  20. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi, iOS is poorly supported with UnityWebRequest (which you would use to reach any internet service), but SoundCloud is reachable via the old AudioStreamMinimal, I think, since that always worked for some reason despite the fact that *Minimal doesn't handle HTTPS properly...)
All components plug seamlessly into a Unity audio source (so you can use its methods), but Apple Music can't be reached via normal HTTPS requests directly anyway (for obvious reasons)

So overall - secure HTTPS requests (with the possible exception of SoundCloud) don't work on iOS (only plain HTTP / *Minimal components), and you'd need a proper iOS-specific implementation to access Apple Music (and I'm not sure about access to the actual media PCM content, which you would need for anything related to a Unity audio source) - which the asset unfortunately doesn't provide
     
  21. whiteaangel208


    Joined:
    Jun 14, 2018
    Posts:
    2
Greetings. I got the asset for playing sound from a server. In the description I saw that when using the AudioStreamMinimal component the sound will keep playing even with the screen off, but when I turn off the screen, the sound loops at one point. I thought I was doing something wrong and downloaded the demo build to test it, but it's the same: when the screen turns off, the sound loops. Can you tell me what the problem could be? Android 10.
     
  22. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi, I appreciate that you tested the demo too!
but the bad news is that the situation on Android is rather.. random, to say the least
I currently don't have the means to test on Android 10, and the only correct way to implement this is indeed via a service
/ if you make this work by modifying the activity lifetime via the Gradle build, lmk - but that way is very intrusive and should not be recommended anywhere, even though it's the easiest solution, unfortunately /
     
  23. whiteaangel208


    Joined:
    Jun 14, 2018
    Posts:
    2
    Thanks for the answer. I'll try to figure it out.
     
  24. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
    I just submitted an update for review -

    ===============================================================================
    2.5 112021 No/v

    Updates/fixes:
- AudioStreamDevicesChangedNotify : significant updates and fixes for the outputs/inputs notification component, for all components which use an output/record driver id, and for all demo scenes which display and access system outputs and inputs
    - more user friendly messaging for redirection running
    - excluded DivideByZeroChecks Il2CppSetOption too for callbacks
    - completely decoupled iOS parts of the asset from the rest, so AVAudioSession output/input can be used separately if needed (i.e. no FMOD dependency required)

    demo scenes:
- more scroll areas
- device lists + selection now update correctly based on change notifications

- Several bugfixes and refactors, #define updates to reflect changes in recent Unity versions
    - Updated documentation and tooltips where appropriate

    - tested with FMOD 2.01.11 (latest Unity Verified),
    - native plugins compiled against FMOD 2.01.11


a PSA/WARNING: It looks like sounds created via FMOD networking (i.e. all Legacy components which stream from the network) fail with 'ERR_FILE_EOF - End of file unexpectedly reached while trying to read essential data (truncated?).' when opening URLs which worked before.
I'm not sure when this started happening, but it's in 2.01.07 already (and the FMOD-provided core API network streaming example fails with the same error with a common MPEG stream URL which worked before).
I've replaced legacy components in scenes which were still using them as a leftover, but at this point there's probably nothing I can do until FMOD updates its networking - sorry about that !

    / [ streaming from network would require new asset without FMOD dependency, so maybe split to two assets - will try to figure something out ]
     
    jGate99 likes this.
  25. jGate99


    Joined:
    Oct 22, 2013
    Posts:
    1,945
Hi @r618
A recent plugin launched on the Asset Store lets you play audio even when the Unity app is in the background, for both iOS and Android; it'd be great if you provided that support so we don't have to use another plugin just for that
     
  26. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
feel free to DM the link, I'll take a look
but my asset currently doesn't provide streaming on iOS, and as per the PSA in my previous post, even the old component doesn't work for link/s which previously did (it might be because the format of the radio I use for testing changed, not sure) - fixing the former would require significant effort right now, and as for the latter - until /and if/ FMOD fixes this, there's really no point in using it on iOS for audio streams
so overall it really doesn't make sense to continue w/ FMOD for networked streaming in its current state, I think

if I fix this it would most likely be in another, new asset - to keep streaming separated (I should probably come up with some upgrade scheme for owners of the current asset)
- the current one would then still be based on FMOD but intended for audio device/input management only
     
    jGate99 likes this.
  27. jGate99


    Joined:
    Oct 22, 2013
    Posts:
    1,945
i'll gladly pay full price for a new asset if it lets me play streamed audio (podcasts) both in Unity and while Unity is in the background, on iOS + Android.
     
  28. MaxQuinonesSantander


    Joined:
    Feb 20, 2020
    Posts:
    11
Hello! I'm just starting with AudioStream. =) I would like to know how I can transmit into a source from Soundflower or directly from Audiomovers. Thanks
     
  29. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
Hi, please have a look at, for example, the AudioStreamInput2D demo scene - it will let you pick an input present in your system and record from it
the script in the demo scene shows how to access and process audio data from the chosen source via its game object
     
  30. tbqoxmf


    Joined:
    Sep 30, 2018
    Posts:
    2
Hi @r618
Can I get serializable audio data every few frames?
I want to send the microphone voice and Windows sound, like a voice chat,
but I don't want to make a voice chat -
I want to get the data every few frames and play the sound back by deserializing that data
My English is not perfect, but I hope what I said comes across well
     
  31. Fatamos


    Joined:
    May 27, 2020
    Posts:
    12
Hi @r618
So I've been working on this app for quite some time now. It is a sound recorder for a museum: it is supposed to record an instrument, and later that recording can be played back after it is finished. I checked the AudioStreamInput2D demo scene, but the whole thing and the scripts confuse me a little bit. Isn't something like this supposed to be simple? I am quite confused and unsure which method I should call in my scene, which method does the recording, and which method plays the recording back... If you could point me in some direction, I would appreciate it... Thank you :)
     
  32. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
Hi, if I understood you correctly, you want to combine microphone and system/Windows audio into one signal, save/serialize it, and then later play it back
so for combining two sources please have a look at e.g. IcecastSourceDemo and how two sources - here a microphone and an AudioSource - are placed in a scene; the resulting signal is sent (optionally encoded) to an Icecast server - which you don't need, you want to save it instead
in this demo the signal is captured on the main listener - so the whole scene audio is captured
you'd have to save/serialize it yourself though - there's no automatic built-in serialization
you can use the 'GOAudioSaveToFile' script: attach it to the main listener and it will save the scene audio into a WAV file, by default in StreamingAssets on desktop
there are other ways, but this is probably the easiest / I recommend copying the demo scene, removing all the icecast/demo parts and using the rest with the saving script /
     
  33. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
Hi, this is similar to the previous question - in fact it's almost identical )
first of all, the components themselves don't have any serialization support - you have to save any signal you want manually
the demo shows how to start/stop recording, as in capturing audio data from the device (and playing it back immediately if you unmute the output)
to save the audio you can likewise attach 'GOAudioSaveToFile' to the microphone object and let it save the component/microphone audio. there's one caveat when working with components like this - note the 'AudioSourceMute' script attached to the mic object in the scene: it has to be the _last_ component on the GO, i.e. attached at the bottom, after the GOAudioSaveToFile component / otherwise you'd get only a silenced signal saved into the file /
/ again, I recommend copying the demo scene, removing all the not needed demo parts and making user script/s to handle this /
let me know if you make this work!
     
    Last edited: Nov 10, 2021
  34. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
note this then uses the filesystem for the captured data - it was not entirely clear from your description what exactly the use case was -
it's possible to create an AudioClip at runtime with the captured data, and you can combine arbitrary OnAudioFilterReads in a user script - you can use e.g. AudioSourceCaptureBuffer to expose their buffers and combine them - then either save the result or make a buffer for a new fixed-length audio source
saving a file would probably be the easiest though; let me know if that's the case
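The runtime-AudioClip route mentioned here can be sketched with stock Unity calls (the helper name is mine):

```csharp
using UnityEngine;

// Sketch: turn a captured float[] PCM buffer into a playable AudioClip at
// runtime, instead of going through the filesystem.
public static class CapturedClip
{
    public static AudioClip Create(float[] samples, int channels, int sampleRate)
    {
        var clip = AudioClip.Create("captured",
                                    samples.Length / channels, // length in sample frames
                                    channels, sampleRate,
                                    false);                    // non-streaming clip, so SetData works
        clip.SetData(samples, 0);                              // copy captured PCM into the clip
        return clip;                                           // assign to an AudioSource and Play()
    }
}
```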
     
  35. Fatamos


    Joined:
    May 27, 2020
    Posts:
    12
Alright, thank you! GOAudioSaveToFile saves the audio to a file; I would like to save it only temporarily - once the user goes back to the main page of the app, the recording gets deleted, so storage doesn't fill up. Is there a script that handles this type of work, or should I tweak the script so the recording gets deleted?
    Thanks for your time and for staying active on the forum!
     
  36. MaxQuinonesSantander


    Joined:
    Feb 20, 2020
    Posts:
    11
Hello everybody,
I would like to get the stream URL from Audiomovers. Is it possible?
     
  37. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
please use the common .NET File/Directory APIs to manage saved recordings
the audio files are named following the template 'SoundRecording_gameobjectnameORaudioclipname_dateinfo...wav'
they're saved in Application.streamingAssetsPath on desktop and in Application.persistentDataPath on mobiles
see the source of the StartSaving method in the GOAudioSaveToFile component
so you can e.g. Directory.GetFiles from the directory with a search pattern and sort by creation date to get the last recording, which you can then delete - or any other suitable method; check if there's only one/none and so on...

it's literally just a file on the filesystem after the recording is finished - the file needs to be closed - this is done in public StopSaving and in OnDestroy, so if you leave the scene it will be saved; otherwise you can call StopSaving manually and it should close the file
// there's a small chance it can throw an exception in the audio callback, now that I'm looking at it - but those last few audio frames after you call StopSaving should just be ignored if that happens
- if not, please let me know -
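A sketch of the deletion idea above using plain .NET calls, assuming the desktop location and 'SoundRecording_*' naming just described:

```csharp
using System.IO;
using System.Linq;
using UnityEngine;

// Sketch: find and delete the most recent saved recording.
public static class RecordingCleanup
{
    public static void DeleteLastRecording()
    {
        var last = new DirectoryInfo(Application.streamingAssetsPath)
            .GetFiles("SoundRecording_*.wav")          // the saver's naming template
            .OrderByDescending(f => f.CreationTimeUtc) // newest first
            .FirstOrDefault();

        if (last != null)      // null when no recordings exist yet
            last.Delete();
    }
}
```

On mobile, swap in Application.persistentDataPath, per the paths above.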
     
  38. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
So it looks like they access their audio feed in some unknown, mysterious way, and a direct media link is not easily accessible to a browser user
     
  39. Fatamos


    Joined:
    May 27, 2020
    Posts:
    12
I managed to save a recording. I made a separate button to listen to the recording. Also, I've set the default file name to "rec.wav", so every new recording overrides the previous one. Though, I had to remove the AudioListener component from the MainCamera and add it to the GOAudioSaveToFile game object, which is referenced by another AudioController script that handles the recording and the listen function. I'm not sure if that is the correct way to handle the audio, but it's working.
    Here's my AudioController script that handles the recording and audio preview:
    Code (CSharp):
    public class AudioController : MonoBehaviour
    {
        [SerializeField] private AudioStreamInputBase _audioStreamInputBase;
        [SerializeField] private Button _recordButton;
        [SerializeField] private Button _listenButton;

        [SerializeField] private GOAudioSaveToFile _goAudioSaveToFile;

        public const string _audioName = "rec.wav"; // file name defined in GOAudioSaveToFile line 111
        public AudioSource _audioSource;
        public AudioClip _audioClip;
        public string _audioPath;

        private bool state;

        private IEnumerator LoadAudio()
        {
            WWW request = GetAudioFromFile(_audioPath, _audioName);
            yield return request;

            _audioClip = request.GetAudioClip();
            _audioClip.name = _audioName;

            PlayAudioFile();
        }

        public void PlayAudioFile()
        {
            _audioSource.clip = _audioClip;
            _audioSource.Play();
            _audioSource.loop = false;
        }

        private WWW GetAudioFromFile(string path, string filename)
        {
            string audioToLoad = string.Format(path + "{0}", filename);
            WWW request = new WWW(audioToLoad);
            return request;
        }

        public void Start()
        {
            _recordButton.onClick.RemoveAllListeners();
            _recordButton.onClick.AddListener(() =>
            {
                if (!state)
                {
                    RecordAudio();
                    state = true;
                }
                else
                {
                    PauseAudio();
                    state = false;
                }
            });

            _listenButton.onClick.RemoveAllListeners();
            _listenButton.onClick.AddListener(() =>
            {
                StartCoroutine(LoadAudio());
            });

            _audioSource = gameObject.AddComponent<AudioSource>();
            _audioPath = "file://" + Application.streamingAssetsPath + "/Sound/";
        }

        public void RecordAudio()
        {
            _audioStreamInputBase.Record();
            _goAudioSaveToFile.StartSaving();
            Debug.Log("starting saving");
        }

        public void PauseAudio()
        {
            _audioStreamInputBase.Stop();
            _goAudioSaveToFile.StopSaving();
            Debug.Log("Stopped saving");
        }
    }
Later I will add a Slider component so the user can scrub through the recorded audio. Also, if the "Listen" button is clicked while recording, the audio will be saved automatically and will start playing immediately. Basic UI concept
     
  40. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
    :thumb_up:

you can put the AudioListener wherever, especially for 2D sounds
(for 3D) it's usually on the main camera, since that's the point of view of the user and, by extension, also the point of hearing of the user
the saving script requires a special audio buffer to be running on the game object it is attached to
this is either an AudioSource on the game object (then you save only that specific source), or an AudioListener provides it, in which case the entire scene audio is saved - as you have it now
/ just note 3D sounds might not sound positioned correctly in space when the listener doesn't correspond to the player view.. /
     
  41. Pazyed


    Joined:
    Jul 1, 2021
    Posts:
    4
    Hi,
I want to determine, in code, which speaker will play at any given time. One speaker is wired and the other is connected via Bluetooth. I found your asset suited for the job and tried the demo, and it worked when I tried it using the GUI.

After purchasing the asset, a few errors and complications appeared:

1. After importing the asset into my project, I got an error that two libraries already exist in a different plugin in my project (with newer versions of those libraries) - NetMQ and AsyncIO. I changed their names in the AudioStream plugin as a temporary solution.
  What would solve this permanently?
2. How can I choose different output devices using code and not the GUI?
    Thanks,
    Paz
     
  42. r618


    Joined:
    Jan 19, 2009
    Posts:
    1,305
Hi !
As for what would permanently solve this - Unity having complete support for NuGet packages or something equivalent, I suppose
// I could manually move the used libraries into a separate new namespace - but that sounds just wrong and would probably breach licensing too
-- I'll have a look at asmdefs to see if they allow changing the public assembly namespace
they're not used because the asset works/worked in Unity versions which never had them, and once they're used in a project, the whole project typically has to be structured around asmdefs
it's different now with newer Unity versions where they're used more often, but it nevertheless changes the situation

If renaming things worked for you, please keep it for now !

look at how the demo scene script does it from the GUI (I recommend copying the whole demo scene and adding your scripts); the device itself is switched simply by calling SetOutput on the component; you'd need two instances - one for each output - and manage them individually, I'd imagine
- look at how device-change events are used in the demo scene
- see the Documentation for BT on iOS; BT devices should work without issues otherwise

Thanks !
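A hedged sketch of the script-only switching described above - SetOutput is the call named here, but the exact signature and the driver ids on a given machine are assumptions (check the demo scene script for the real usage):

```csharp
using UnityEngine;

// Sketch: two AudioSourceOutputDevice instances, one per physical output,
// each switched from code rather than from the demo GUI.
public class OutputSwitcher : MonoBehaviour
{
    public AudioSourceOutputDevice wiredRedirect;      // component playing on the wired speaker
    public AudioSourceOutputDevice bluetoothRedirect;  // component playing on the BT speaker

    // Driver ids follow the system output list shown in the demo scene;
    // 0 and 1 here are placeholders for your actual devices.
    public void PlayOnWired()     { wiredRedirect.SetOutput(0); }
    public void PlayOnBluetooth() { bluetoothRedirect.SetOutput(1); }
}
```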
     
  43. Pazyed


    Joined:
    Jul 1, 2021
    Posts:
    4

I could not figure out from the demo how to switch the audio between 2 different speakers using code only (i.e. have each speaker play at a given time, determined in advance) rather than via the GUI.
Can anyone please guide me on how to do so?
     
  44. oculartech


    Joined:
    Jan 9, 2020
    Posts:
    2
I'm trying to get the raw audio data from an AudioClip with an AudioStreamInput attached, but all the sample values are 0.
    Code (CSharp):
    int sampleRate = 44100;
    float[] allSamples = new float[sampleRate];
    audioSource.clip.GetData(allSamples, 0);
    audioSource.GetOutputData() does work (like in the AudioStream demo) and gives the spectrum, but the audio clip seems empty. I'm missing something?

    Edit:
    at first, I got this error:
    Cannot get data from streamed samples for audio clip "IN 3-4 (BEHRINGER UMC 404HD 192k)". If the audio clip was created via AudioClip.Create and no PCM read callback was provided, the 'stream' argument must be false. For a disk-based AudioClip changing the load type to DecompressOnLoad on the AudioClip will allow modification of the data

    So I changed the stream parameter in AudioClip.Create to false; when I set it to true, I see that GetAudioOutputBuffer() is generating data, but it doesn't seem to be passed to the audioClip
     
    Last edited: Dec 14, 2021
  45. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi, although the documentation [https://docs.unity3d.com/ScriptReference/AudioClip.GetData.html] states that the returned array will be filled with zeroes for compressed clips, it behaves the same for streaming clips too - which is the case here
    [I create a small streaming clip for the input/mic buffer - GetOutput/SpectrumData work because they operate just on the 'current' portion of the audio buffer]
    you wouldn't be getting meaningful audio if you used GetData from Update anyway, since it can't (by design) be in sync with the audio thread
    in this case OnAudioFilterRead will work for accessing input data; depending on what you want to do with it, you'd have to progressively save/process it as needed
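
    A minimal sketch of that approach using Unity's standard OnAudioFilterRead callback (the component and method names below are made up for illustration; attach it next to the AudioSource fed by AudioStreamInput). OnAudioFilterRead runs on the audio thread, so shared state is guarded with a lock:
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public class InputCapture : MonoBehaviour
    {
        readonly object _lock = new object();
        readonly List<float> captured = new List<float>();

        // audio thread: progressively accumulate interleaved samples
        void OnAudioFilterRead(float[] data, int channels)
        {
            lock (_lock)
                captured.AddRange(data);
        }

        // main thread: drain everything captured so far
        public float[] Drain()
        {
            lock (_lock)
            {
                var result = captured.ToArray();
                captured.Clear();
                return result;
            }
        }
    }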
     
  46. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Greetings users, just submitted 2.6 for review -
    (mainly fixes for the updated Resonance interface, but there's a new demo + a small overhaul of playing audio while downloading it - hope you'll find it useful!)

    2.6 122021 /|\

    Updates/fixes:
    concerning mainly latest official FMOD 2.02.04 release for Unity/Asset Store

    - fixed AudioStream (main audio streaming component) behaviour - should now be playing/streaming correctly without running out of data
    - updated Resonance interface + demos (no longer compatible with previous FMOD versions)
    - updated mixer effect plugins w/ FMOD 2.02.04
    - small stability fixes

    New:
    - AudioStreamDownloadRealtimeDemo : shows how to handle progressive playback of audio while it is being downloaded from a netradio
    - new 'RuntimeSettings' configuration object:
    - ScriptableObject in 'AudioStream\Scripts\AudioStreamSupport\Resources'
    - has user customizable field 'Cache Path' which allows entering a path to a directory which will be used as temporary storage for all components which use disk as cache
    (currently it means AudioStreamDownload, AudioStreamMemory and AudioStream when using DISK as cache).

    Tested for Unity 2022.1(beta) compatibility:
    -> please enable non secure HTTP downloads in 2022 (and up) Player settings in order to use links in the demo
    - Go to Edit -> Project Settings -> Player -> Other Settings > Configuration and set 'Allow downloads over HTTP' to 'Always allowed'. Alternatively, use your own secure (HTTPS) links only.
    - demo scenes display notice when running on 2022 and up and not having 'Allow downloads over HTTP' enabled

    Bumped version to 2.6 since it's submitted w/ 2019 LTS, too.

    Thanks !
     
    Last edited: Dec 21, 2021
  47. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    the update just went live, cheers !
     
  48. Duke_Hwang

    Duke_Hwang

    Joined:
    Nov 23, 2014
    Posts:
    4
  49. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    if you want to emit/generate programmatic sound, the asset isn't really built for that
    though AudioStreamMemory will pick up anything that's in memory - so if there are bytes in a supported audio format it will play them
    - you can use OnAudioFilterRead in Unity itself; in FMOD you'd have to look e.g. at their core examples like the DSP custom effect /just mentioning that you can access all FMOD functionality from Unity's C# using their Unity package/
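
    For the plain-Unity route, a minimal procedural tone sketch via OnAudioFilterRead (standard Unity API, not part of AudioStream; class name and the 440 Hz default are arbitrary):
    Code (CSharp):
    using UnityEngine;

    [RequireComponent(typeof(AudioSource))]
    public class SineGenerator : MonoBehaviour
    {
        public double frequency = 440.0;
        double phase;
        int sampleRate;

        void Awake() { sampleRate = AudioSettings.outputSampleRate; }

        // audio thread: fill the buffer with a sine wave on all channels
        void OnAudioFilterRead(float[] data, int channels)
        {
            double step = 2.0 * System.Math.PI * frequency / sampleRate;
            for (int i = 0; i < data.Length; i += channels)
            {
                float sample = (float)System.Math.Sin(phase) * 0.5f;
                phase += step;
                for (int c = 0; c < channels; c++)
                    data[i + c] = sample;
            }
        }
    }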

    -- I'm not sure what you're asking exactly though, so feel free to follow up to clarify if needed
     
  50. OscarCybernetic

    OscarCybernetic

    Joined:
    Sep 12, 2018
    Posts:
    23
    I updated AudioStream and FMOD for Unity to the latest recently and I noticed that some internet radio stations that used to work fine don't work anymore. When I try to use AudioStream to connect to
    http://stream.antenne.de:80/antenne
    it gives me an FMOD ERR_FORMAT error on getOpenState.
    I tried increasing the block size and multiplier, changing the audio format to MPEG. I suspect this might be an FMOD issue, but I'm not sure. Any thoughts?

    It's happening on Windows 10 Unity Editor; haven't tried making a build for any platform yet.

    EDIT:
    Did some more digging and I noticed that the Media_AsyncCancel callback is called right after CreateSound finishes. Afterwards I get the ERR_FORMAT errors

    EDIT 2: Get the same behavior when making builds
     
    Last edited: Jan 20, 2022