
AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    - same for the interface checks in the base class, i.e.
    while (numAllDrivers < 1)
    instead of
    while (numConnectedDrivers < 1)
     
  2. lsliwinski

    lsliwinski

    Joined:
    Sep 23, 2020
    Posts:
    5
    numAllDrivers is also 0 after calling this.recording_system.system.getRecordNumDrivers


    When I connect my physical microphone to the Mac, both numAllDrivers and numConnectedDrivers are equal to 1
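    For reference, the query being discussed could be sketched like this - a minimal sketch assuming the stock FMOD 2.x C# wrapper (`getRecordNumDrivers` reports both registered and actually connected record drivers):

```csharp
// Sketch only - assumes the standard FMOD 2.x C# wrapper.
// On macOS with no physical microphone attached, both counts
// can legitimately be 0, as observed above.
FMOD.System system;
FMOD.Factory.System_Create(out system);
system.init(32, FMOD.INITFLAGS.NORMAL, System.IntPtr.Zero);

int numAllDrivers, numConnectedDrivers;
system.getRecordNumDrivers(out numAllDrivers, out numConnectedDrivers);

// recording only makes sense once a driver is actually connected
if (numConnectedDrivers < 1)
    UnityEngine.Debug.LogWarning("no connected recording devices");

system.close();
system.release();
```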
     
  3. lsliwinski

    lsliwinski

    Joined:
    Sep 23, 2020
    Posts:
    5
  4. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    thanks for testing - that's as far as it can go on macOS
    and no, FMOD by itself can't do system capture, since there's no way of enabling loopback/listening on audio hw in macOS

    I recommend installing Soundflower [https://github.com/mattingalls/Soundflower/releases/tag/2.0b2] - it also works on my current Big Sur beta, so it should be good to go for at least some time to come

    the thread is worth reading, but with Soundflower alone you can do system audio capture:
    after installing it (Big Sur requires a reboot, not sure about previous versions...)
    create a Multi-Output device in the Audio MIDI Setup app and select both Built-in Output and Soundflower (2ch) as its sources
    - don't forget to right click / cog wheel settings on the multi output and set it to be used for output

    The Soundflower (2ch) interface should now be visible in the AudioStream input demo scenes (meaning you can record from it)
     
  5. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    I should probably mention this in the documentation [Soundflower basically inserts itself between the system driver and the final output - the same way as all the other virtual drivers do]
    So thanks for pointing this out !
     
  6. lsliwinski

    lsliwinski

    Joined:
    Sep 23, 2020
    Posts:
    5
    Thanks for the clarification
     
  7. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    an addition to the above [macOS system audio capture with Soundflower]:
    after further playing with this, it looks like Audio MIDI Setup doesn't behave entirely consistently - I was able to get the specific Soundflower (2ch) interface before, but no longer ^-^
    - what seems to work is that you also have to have an Aggregate device created - containing the system audio output you want playback to be on and Soundflower (2ch) as subdevices - and use _that_ for system input
    the Multi-Output device has to be used for system output and also have Soundflower (2ch) as a subdevice

    What is going on, I think, is that there has to be a capture interface to record from in the first place (that's what only an Aggregate device allows), and Soundflower's buffers have to be fed with actual audio via the Multi-Output (so Soundflower can fill them internally)
    I previously got into a state where I had only a Multi-Output w/ a Soundflower subdevice and everything worked, but apparently no longer :)

    Anyway, it's slightly messy, so maybe a better option is to indeed use something like https://rogueamoeba.com/audiohijack/
    [which I would test, but it doesn't run on Big Sur yet, so it might be a better option once it does, or you can use it now on one of the previous macOS version/s]
     
  8. idialab

    idialab

    Joined:
    Jun 9, 2017
    Posts:
    5
    Hello, I just purchased your asset to be able to change input/output devices from within the Unity build. I didn't notice there were dependencies until I got the
    The type or namespace name 'FMOD' could not be found
    error. So I went and got the FMOD for Unity asset and imported it. And now I get a few
    'StringWrapper' does not contain a constructor that takes 1 arguments
    errors. After looking a little harder, it seems that your latest release has breaking changes for FMOD versions earlier than 2.01.04, while FMOD only has their asset at 2.00.10.

    How can I remedy this?
     
    Last edited: Oct 5, 2020
  9. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hi, the store package is (unfortunately) not entirely up to date; please get the latest official version from https://fmod.com/download [an account is needed]
    - I decided to track the latest FMOD and break compatibility instead, hope that's understandable
     
    idialab likes this.
  10. idialab

    idialab

    Joined:
    Jun 9, 2017
    Posts:
    5
    Ah, I didn't realize FMOD was behind on the asset store. Downloaded the latest version from the website and everything is working as expected. Thanks!

    EDIT: I see you addressed this very issue in the README. My mistake.
     
    Last edited: Oct 6, 2020
    r618 likes this.
  11. machtyyy

    machtyyy

    Joined:
    Oct 16, 2020
    Posts:
    2
    Hello, I'm considering buying your asset.
    I want to have different sounds for 4 speakers in Unity.

    1. One wav file with 4 channels.
    2. An audio interface with four outputs that support ASIO.

    I want Unity to read the file from 1 and play it through 2 to the 4 speakers.

    The environment is as follows.
    * Windows 10
    * Unity 2019.2.15f1
    * Steinberg UR44C (4 outputs, ASIO support)

    I ran the demo, but there was no way to output it through ASIO. Does this asset not support ASIO?
    (I have confirmed that the target wav can be played from the ASIO driver using REAPER.)

    If I can sync the playback, I can also prepare four one-channel wav files and play one to each speaker. Either way, I'm assuming ASIO driver support.
     
  12. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hi,
    It doesn't have support for ASIO out of the box, indeed
    - it can be enabled in code, but given that:
    there's only a single ASIO4ALL device exposed by the ASIO control panel, which needs to be manually configured, and - more importantly - I had users reporting machines crashing with this enabled with e.g. Focusrite gear,
    I decided not to expose it to users or in the demo for now

    So you can try it at your own risk if you want; feel free to PM/email me with questions/results -)
    Cheers
     
  13. machtyyy

    machtyyy

    Joined:
    Oct 16, 2020
    Posts:
    2
    I understand the current specification. Having considered it, I think there is no other way.

    Thank you for your kind and honest answer.
     
  14. WalterAnthony

    WalterAnthony

    Joined:
    Apr 10, 2018
    Posts:
    8
    [attached screenshot: upload_2020-10-26_17-8-16.png]

    I mounted the "AudioSourceOutputDevice" component in the scene, but when I called SetOutput(), the value of ready was false
     
  15. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    hey @WalterAnthony - I replied via email, but I'll post it also here:

    I suppose the demo scene AudioSourceOutputDeviceDemo is working with your devices - is that correct ?
    You have to do what the error message says - wait for the ready flag to become true; on the above screenshot you're printing it in the same frame you try to set the output, I suppose
    and - important - the component has to be mounted on a game object which is enabled (so its coroutines can run)
    Look at how this is done in the demo script - it waits for the flag in the Start coroutine, and the UI is only available once it's set
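    For illustration, the waiting could look roughly like this - a sketch only, where the `ready` flag and `SetOutput` names follow this discussion and may differ from the actual AudioStream API:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: wait for the output component to finish initializing before
// switching output. Must live on an enabled game object so its
// coroutines can run.
public class SetOutputWhenReady : MonoBehaviour
{
    public AudioStream.AudioSourceOutputDevice outputDevice; // hypothetical reference
    public int outputDriverId = 1;

    IEnumerator Start()
    {
        // poll until the component reports it has finished initializing
        while (!this.outputDevice.ready)
            yield return null;

        this.outputDevice.SetOutput(this.outputDriverId);
    }
}
```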

    Let me know if this helps!
     
  16. Erandunity

    Erandunity

    Joined:
    Dec 16, 2016
    Posts:
    2
    Hello @r618 ! I've bought AudioStream and so far, I can see it really helping us with many of our projects. I'm relatively new to FMOD and low-level audio processing and want to tackle a specific scenario:

    I want to stream a multichannel audio clip (16ch) and would like to split each channel to a specific Unity AudioSource (i.e. 16 different audio sources). I need to have the spatialization and reverb effect applied to them.

    I don't mind digging into the API and writing some code; I was wondering if you have some insight as to where to start looking. Thanks a lot!
     
  17. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hey @Erandunity, thanks for using it!
    You will have to play the 16ch media through FMOD directly (see MediaSourcePlaybackDemo)
    [note: if it's network streaming, this is limited to FMOD networking, so simple plain HTTP requests only]

    But currently there's no implemented way of also playing these back through Unity audio - the way it can be done is by creating a DSP capture effect to capture FMOD's output, deinterleaving the channels, and feeding each one to its respective AudioSource - either via PCM or OnAudioFilterRead callback/s -
    You can see how the DSP capture effect is currently used in AudioStreamBase (the stream decoded by FMOD is captured and prepared for an AudioSource) - you can basically copy this, only you'd have to do the channel deinterleaving yourself if you want to split 16ch media

    So this way you can have 16 AudioSources, each playing one channel, and you can attach effect components directly on their game objects. I think spatialization might not work when/if OnAudioFilterRead is used though - you might want to try the PCM read callback of the AudioClip instead (or use the mixer, but spatialization must be working on a game object first, unless you're using something like the Resonance mixer effect, I think)
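    The deinterleaving step itself is plain buffer shuffling, independent of FMOD/Unity APIs - a minimal sketch:

```csharp
// FMOD delivers interleaved frames [ch0, ch1, ..., chN-1, ch0, ...];
// each AudioSource needs its own mono buffer.
public static class ChannelUtil
{
    public static float[][] Deinterleave(float[] interleaved, int channels)
    {
        int frames = interleaved.Length / channels;
        var result = new float[channels][];
        for (var ch = 0; ch < channels; ++ch)
            result[ch] = new float[frames];

        // copy sample (frame, ch) of the interleaved buffer into channel ch
        for (var frame = 0; frame < frames; ++frame)
            for (var ch = 0; ch < channels; ++ch)
                result[ch][frame] = interleaved[frame * channels + ch];

        return result;
    }
}
```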

    Let me know if you have further questions ~
     
  18. Erandunity

    Erandunity

    Joined:
    Dec 16, 2016
    Posts:
    2
    @r618 Thank you for all this information, this is really valuable. I appreciate you taking the time!
     
  19. WalterAnthony

    WalterAnthony

    Joined:
    Apr 10, 2018
    Posts:
    8
    Hello, the flag problem has now been solved, but I've encountered another one: I want two output devices on the computer to play the same audio at the same time, regardless of whether it is synchronized or not.
     
  20. Kawalyn

    Kawalyn

    Joined:
    Apr 8, 2015
    Posts:
    14
    Hey,

    first off, thanks for this very helpful tool - I'm currently using it to route multiple mics to different outputs.

    Sadly I started to run into an issue last night which persists today. The longer I keep Unity open, the higher the chance that on starting my scene I get a few "Exception: Factory.System_Create ERR_MEMORY - Not enough memory or resources." errors, and the plugin stops working as a result. The only thing that fixes it seems to be restarting Unity.

    I only have two AudioSourceInput2D components for my two mics; these should hardly use that much memory? Today I had the third crash in 20 minutes, so something is clearly wrong...

    Thanks for any help in advance, have a nice week!
    Kawa

    Exception: Factory.System_Create ERR_MEMORY - Not enough memory or resources.
    AudioStream.AudioStreamSupport.ERRCHECK (FMOD.RESULT result, AudioStream.LogLevel currentLogLevel, System.String gameObjectName, AudioStream.EventWithStringStringParameter onError, System.String customMessage, System.Boolean throwOnError) (at Assets/Plugins/AudioStream/Scripts/AudioStreamSupport/AudioStreamSupport.cs:100)
    AudioStream.FMODSystemInputDevice..ctor () (at Assets/Plugins/AudioStream/Scripts/AudioStreamInput/FMODSystemInputDevice.cs:32)
    AudioStream.FMODSystemsManager.FMODSystemInputDevice_Create (AudioStream.LogLevel logLevel, System.String gameObjectName, AudioStream.EventWithStringStringParameter onError) (at Assets/Plugins/AudioStream/Scripts/AudioStreamInput/FMODSystemsManager.FMODSystemInputDevice.cs:25)
    AudioStream.FMODSystemsManager.AvailableInputs (AudioStream.LogLevel logLevel, System.String gameObjectName, AudioStream.EventWithStringStringParameter onError, System.Boolean includeLoopbackInterfaces) (at Assets/Plugins/AudioStream/Scripts/AudioStreamInput/FMODSystemsManager.FMODSystemInputDevice.cs:86)
    AudioManager.GenerateInputDevicesList () (at Assets/AudioManager.cs:52)
    AudioManager.OnAudioDevicesChanged (System.String name) (at Assets/AudioManager.cs:46)
    AudioManager.InitializeAudioDeviceLists () (at Assets/AudioManager.cs:39)
    AudioDevicesDropdownHandler.Awake () (at Assets/AudioDevicesDropdownHandler.cs:51)
     
    Last edited: Nov 5, 2020
  21. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hey @Kawalyn , do you get any errors printed in the console when *exiting* play mode ?
    If an FMOD system is not properly released, it refuses to create new system instances after hitting some hard limit
    If you can't pinpoint where the workflow is broken feel free to PM/post full editor log
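    The kind of teardown being described could be sketched like this - releasing a manually created FMOD system when its object goes away, so repeated play mode sessions in the Editor don't leak native instances (standard FMOD C# wrapper calls; the surrounding class is hypothetical):

```csharp
using UnityEngine;

// Sketch: pair every FMOD system creation with close + release.
public class FMODSystemHolder : MonoBehaviour
{
    FMOD.System system;

    void Awake()
    {
        FMOD.Factory.System_Create(out this.system);
        this.system.init(32, FMOD.INITFLAGS.NORMAL, System.IntPtr.Zero);
    }

    void OnDestroy()
    {
        // without this, each play mode run leaks one native FMOD system,
        // eventually triggering ERR_MEMORY on the next System_Create
        this.system.close();
        this.system.release();
    }
}
```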
     
  22. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
  23. Kawalyn

    Kawalyn

    Joined:
    Apr 8, 2015
    Posts:
    14
    This seems to have fixed the issue for me, at least the RAM did not start climbing even after two hours of usage, thanks!
     
    r618 likes this.
  24. Mandelbr0t

    Mandelbr0t

    Joined:
    Mar 20, 2018
    Posts:
    1
    Hey,
    I'm interested in using AudioStream plugin for one of my projects. I'm working with Snapdragon 835 and I wanted to know if your plugin takes advantage of hardware accelerated encoding. Thank you :)
     
  25. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hi, unfortunately no - any encoding is done in user C# scripts, so .NET runtime only, sorry
    (decoding is not hw accelerated either, I think - or rather it depends on how a file/media is being played - in any case it's done by FMOD, and I might be wrong on this)
     
  26. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    / small update submitted for review - thanks all for testing !

    v 2.4.3 112020 11

    Updates/fixes:
    - AudioStreamDevicesChangedNotify : since the notification callback is by default installed on system output 0, there were previously issues when the system default changed as a result of device/s being un/plugged
    this should now be fixed: it tries to stop all ASODs running in the scene after a device 0 change, releases system 0 and installs a new callback on the new default
    - MediaSourceOutputDeviceDemo : fixed timing issue with resolving asset paths and OnGUI

    - tested with FMOD 2.01.06
    - submitted with 2018 LTS as minimal version
     
  27. mrst003

    mrst003

    Joined:
    Jul 22, 2020
    Posts:
    4
    Hi, I bought this asset to send the audio from the server to the client.
    I'm trying AudioStreamNetMQClientDemo and AudioStreamNetMQSourceDemo and there seems to be a delay of about 1.5 seconds.
    Is there any way to reduce the latency?
     
  28. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hi, I don't think anything except lower network latency (e.g. going wired instead of over wifi) will help much
    I think the cumulative encoding + decoding + audio latency on both host + client should fit well under 100-200 ms (but I've never measured it); the rest should be packet transmission
    Have you tried setting lower bandwidth / complexity on the host ? That should help with transmission too
    (frame size should play a role only for compatibility with certain routers)
    With an example setup - a PC over a wired connection, another machine over wifi, both connected to the same network/common router - I get an almost immediate response on the client when e.g. changing the source volume in the demo scene
    Let me know if this helps
     
  29. mrst003

    mrst003

    Joined:
    Jul 22, 2020
    Posts:
    4
    I truly appreciate your quick reply.
    When I tried, there was almost no delay between server and client, as you mentioned. The cause was another script inserting the microphone audio into an AudioClip. I did the processing in AudioStreamLegacy and got an immediate response.
    I apologize for the inconvenience of an elementary mistake. Thank you for your kind help.
     
  30. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    hey, that's totally fine - I was wondering if you had any recording inputs involved, since Unity's default mic has large latency
    glad you sorted it!
     
    mrst003 likes this.
  31. nyonge

    nyonge

    Joined:
    Jul 11, 2013
    Posts:
    49
    Hi @r618! I'm looking for some audio input solutions. Can this stream audio from other applications / windows / browser tabs or anything?

    Currently looking only at Win/OSX platform, though we'd also like to deploy to web and mobile in the future. Not relevant for now.

    Ideally what we'd like is to, from our app, be able to select "stream audio from [Select Other Running Application]", eg Spotify or iTunes or Chrome/browser. It'd be amazing to be able to stream audio from a specific tab in Chrome but that may be pushing it lol. I know we can stream audio from a website, but streaming audio from other processes running on the OS is what we're aiming for.

    Also, 2nd quick question that's almost certainly been asked before. I'm a bit unclear on the asset store docs. Is FMOD required for this asset, or only if we want to use FMOD's integration + features? (Which we don't)

    Thanks!
     
  32. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    hi, the asset can only open a single interface for recording - meaning it will 'listen to' all audio being played on it; there is no per-process/window granularity (and most likely never will be)
    [not to mention even this is not available on mobiles at all - at least not without going full Audiobus/AU unit support on iOS, which is kind of pointless for a game engine]
    - please download demo app from the store page and run AudioStreamInput2D (or AudioStreamInput) scene : the devices listed are everything the asset sees audio input wise on your system

    FMOD Unity Integration is _required_ - though only the 'Core' part of it - the asset itself doesn't use FMOD Studio functionality in any capacity

    Hope this helps!
     
  33. AndrewKlepcha

    AndrewKlepcha

    Joined:
    Jun 18, 2013
    Posts:
    3
    Hi @r618 !
    Recently I bought AudioStream to make some in-game audio manipulation possible.
    I want to record audio from my Unity game (Android, iOS) and save it to an external wav file.
    The audio is played by my code under different conditions, and I want to be able to record it from time to time.

    I think there are two possible solutions:
    1. record from the scene's audio listener (already tried, with bad results - because of different sampling rates?)
    2. record from an audio mixer group (is this possible with the help of AudioStream modules?)

    I read the documentation and found GOAudioSaveToFile and AudioStreamOutputDevice, but still have no idea in which configuration I can use them to make solution 2 work

    Could you give me some hints please?
     
  34. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    there is no mixer effect for saving the audio
    stick this script
    Code (CSharp):
    using UnityEngine;

    public class GOAudioSaveUI : MonoBehaviour
    {
        AudioStream.GOAudioSaveToFile saveToFile;
        bool isSaving = false;

        void Start()
        {
            this.saveToFile = this.GetComponent<AudioStream.GOAudioSaveToFile>();
        }

        void OnGUI()
        {
            if (GUILayout.Button(this.isSaving ? "STOP SAVING" : "START SAVING"))
            {
                if (this.isSaving)
                    this.saveToFile.StopSaving();
                else
                    this.saveToFile.StartSaving();

                this.isSaving = !this.isSaving;
            }
        }
    }
    to your main listener, together with GOAudioSaveToFile component, AutoStart off, 'Use This Game Object Audio' on
    when saving is stopped the audio is saved as PCM16 WAV in StreamingAssets
     
    AndrewKlepcha likes this.
  35. AndrewKlepcha

    AndrewKlepcha

    Joined:
    Jun 18, 2013
    Posts:
    3
    as I said before, I have tried variant 1 (in fact with another 3rd party save-WAV component which looks identical to your solution). It's a pity that there is no way to save directly from the mixer :(
    Anyway, thank you for the quick reply. I will try GOAudioSaveToFile as variant 1
     
  36. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    there might be some sampling rate issue on mobiles; it will take some time until I verify everything though
     
  37. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Just to be clear | this pops up from time to time and can be searched back in this thread too | - the asset itself can't/doesn't configure audio interfaces - that has to be done externally by the user, using something like Soundflower on macOS / Virtual Audio Cable on Windows, or similar
    btw this rules out mobiles completely, since this isn't even available there at the OS level; and from an efficiency (sanity) point of view there's not much else except Audiobus-like interop (even) worth considering (so no browser either - that'd mean (a custom browser +) a ~JavaScript-Audiobus/AUv3 bridge, which... I can't even write that without thinking about disturbing something like the Great Old Ones from their slumber)
     
  38. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    @AndrewKlepcha there is an issue with the script, but it's unrelated to the audio itself [it saves to StreamingAssets on iOS - incorrectly - I've updated it to follow the current state of things on iOS, Android and WSA, based on https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html]

    But the audio itself is OK - the file is PCM16 with the 'iOS' samplerate of 24 kHz and plays normally
    I'm not sure what might cause problems with this - the saving itself is done via a BinaryWriter on the file, which should be sufficiently cached, so I recommend checking whether something is stealing scene performance, e.g. using the profiler; maybe test on a different device too, to rule out a hw issue
     
  39. kotor322

    kotor322

    Joined:
    Mar 27, 2017
    Posts:
    7
    Hi @r618 . I updated the plugin and FMOD to the latest version. While music was playing in AudioStream and I opened an OpenFileDialog, the audio just cycled the last 1-2 seconds of sound. I need to pause it before opening and unpause when I close the OpenFileDialog. In legacy AudioStream I don't have this problem. Is there a possibility to use AudioStream without stopping it?
     
  40. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hi @kotor322 I know about this - I will replace the existing main streaming/playback component. I'll let you know as soon as something is ready, but it will be a few months still (at least I'm not saying soon, right)
     
  41. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    btw, how are you opening the file @kotor322 ? I suppose you're using the native/OS file open dialog - is that correct ? Can you point/link me to how exactly you're doing it ?
    it might be possible to invoke it on its own separate thread in order not to block the main one, but I'd have to test to see if that's possible

    as another example / workaround there is e.g. [ https://github.com/yasirkula/UnitySimpleFileBrowser ] ( there is an asset store package too ) which can be used as part of Unity UI (so this situation doesn't even arise)
     
  42. kotor322

    kotor322

    Joined:
    Mar 27, 2017
    Posts:
    7
    Hi, I am using it in a PC build with https://github.com/gkngkc/UnityStandaloneFileBrowser :
    var paths = StandaloneFileBrowser.OpenFilePanel("Open File", "", "", false);
    and tried the async method:
    StandaloneFileBrowser.OpenFilePanelAsync("Open File", "", "", false, (string[] paths) => { });
    and had the same problem in all variants, in build and editor.

    Also I tried EditorUtility.OpenFilePanelWithFilters(EDITOR_HEADER, Application.streamingAssetsPath, EXTENTIONS); and it causes the problem too.
     
  43. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Thanks !
    so yes, this solution uses a native open/save file dialog wrapper, and that means the calling thread blocks completely - you can also see it on the screenshots in the repository description, where e.g. UI buttons are in an inactive, pressed state while the dialog is open, which means the main thread isn't running
    This means that your whole game is completely frozen while the dialog is displayed, not only AudioStream playback - so I strongly recommend not using this; - see also the reasons below -

    I tried to invoke the native dialog on Windows on a separate thread, but only half of this works ( effectively meaning not at all -)
    - it's possible to display it via something like
    Code (CSharp):
    public AudioStream.AudioStream audioStream;
    string[] audioFilepaths = null;
    System.Threading.Thread openFileThread;

    void OnGUI()
    {
        if (GUILayout.Button("Load a file"))
        {
            StartCoroutine(this.SFBOpenFilePanel());
        }
    }

    IEnumerator SFBOpenFilePanel()
    {
        if (this.openFileThread != null)
            yield break;

        this.openFileThread = new System.Threading.Thread(new System.Threading.ThreadStart(this.SFBOpenFilePanel_Thr));
        this.openFileThread.Start();

        // !null indicates the dialog was opened and finished
        while (this.audioFilepaths == null)
            yield return null;

        this.openFileThread.Join();

        if (this.audioFilepaths.Length > 0)
        {
            this.audioStream.url = this.audioFilepaths[0];
            this.audioStream.Stop();
            this.audioStream.Play();
        }

        this.openFileThread = null;
        this.audioFilepaths = null;
    }

    void SFBOpenFilePanel_Thr()
    {
        this.audioFilepaths = SFB.StandaloneFileBrowser.OpenFilePanel("Open File", "", new SFB.ExtensionFilter[] { new SFB.ExtensionFilter("Audio", "mp3", "ogg") }, false);
    }
    - it will display without blocking, a file can be picked and played etc., but this causes the domain to fail to unload afterwards - meaning each script change/recompilation, or exiting Unity, will freeze the Editor
    I'm not sure about the exact reasons, but I was not able to make it work on 2018 LTS - I suspect it's maybe something in the WinForms implementation the asset uses, but I didn't investigate further
    So I strongly recommend using another, Unity-UI-based solution if possible
    I am working on a replacement for the main component which will decode on a separate thread, but even when that's ready, blocking the whole app/game otherwise is not very user friendly regardless
     
  44. mrst003

    mrst003

    Joined:
    Jul 22, 2020
    Posts:
    4
    Hello! I would like to ask you for some help with a Scene in AudioStreamNetMQSourceDemo/AudioStreamNetMQClientDemo.
    In AudioStreamNetMQClient it looks like OnAudioFilterRead is playing the sound, but I would like to turn the received data into an AudioClip.
    From reading the documentation I thought it might be possible, but I don't understand the exact process.
    Could you tell me how to do it?
     
  45. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hi, that's more of a question about audio scripting - you can use PCMReaderCallback [https://docs.unity3d.com/ScriptReference/AudioClip.PCMReaderCallback.html] instead;
    see also https://docs.unity3d.com/ScriptReference/AudioClip.Create.html
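    A minimal sketch of that approach - a streaming AudioClip pulling samples from a queue that the network client would fill (class and field names here are illustrative, not AudioStream API; only AudioClip.Create and PCMReaderCallback are standard Unity):

```csharp
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class QueueClipPlayer : MonoBehaviour
{
    readonly Queue<float> samples = new Queue<float>();
    readonly object queueLock = new object();

    void Start()
    {
        // streaming clip: Unity invokes OnPCMRead whenever it needs data
        var clip = AudioClip.Create("netstream", 48000, 2, 48000, true, this.OnPCMRead);
        var source = this.GetComponent<AudioSource>();
        source.clip = clip;
        source.loop = true;
        source.Play();
    }

    // call this from the network receive handler with decoded float PCM
    public void Enqueue(float[] data)
    {
        lock (this.queueLock)
            foreach (var f in data)
                this.samples.Enqueue(f);
    }

    void OnPCMRead(float[] data)
    {
        // drain the queue; pad with silence on underrun
        lock (this.queueLock)
            for (var i = 0; i < data.Length; ++i)
                data[i] = this.samples.Count > 0 ? this.samples.Dequeue() : 0f;
    }
}
```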
     
  46. Daniellb12

    Daniellb12

    Joined:
    Mar 7, 2018
    Posts:
    1
    So random question since I don't see it in your write up, but figured I'd still ask since I can't find anything about this anywhere.

    Is it possible to use AudioStream to access the raw audio stream of the Oculus Rift microphone and do processing on it to figure out the direction of the specific sound in space? Random and very specific question I know, but since you work with audio significantly more than me figured I'd ask.
     
  47. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    you can ask, no problem - the answers:
    you can get the final PCM data coming from the mic; the asset provides it in a Unity-friendly way - as for all supported inputs
    I don't know of any spatialization technique which uses solely PCM audio stream data to determine the position / direction of the source though - for that you usually need to know the objects' (listener and source) positions / transforms in space
    The asset has basic "3D microphone" scene [ResonanceInputDemo] where the mic is being played in 3D via Resonance
    (note it doesn't use full resonance capabilities such as coloring the sound w/ audio materials or occlusion - for that you'd need proper/full Resonance package)
     
  48. MohHeader

    MohHeader

    Joined:
    Aug 12, 2015
    Posts:
    41
    Hi,

    First thanks for this nice plugin :)

    I am having an issue, and not sure exactly how to fix it.

    I play an audio file, then seek (manually adjust PositionInSeconds once) to a specific point, let's say two or three minutes ahead,

    a fraction of a second of the audio just gets repeated a lot of times, then after a while it continues as normal -
    I think it is something related to buffering.


    I would like the audio not to play but to wait until enough of the buffer is loaded, then play, instead of that (annoying) repeat effect

    I tried to adjust both blockalign & blockalignDownloadMultiplier,
    but couldn't reach any good results; I think they just control the first play, not manual adjustments of PositionInSeconds

    :)

    BTW, I also found myself needing to edit the plugin code itself, as I wanted to add a header to the HTTP request -
    if you could add it to the plugin that would be great

    I did something like the following:

    Code (CSharp):
    namespace AudioStream
    {
        public abstract partial class AudioStreamBase : MonoBehaviour
        {
            Dictionary<string, string> WebRequestHeaderDict;
            public void SetWebRequestHeader(string key, string value)
            {
                if (WebRequestHeaderDict == null)
                    WebRequestHeaderDict = new Dictionary<string, string>();
                if (WebRequestHeaderDict.ContainsKey(key))
                    WebRequestHeaderDict[key] = value;
                else
                    WebRequestHeaderDict.Add(key, value);
            }
        }
    }
    and add the following before calling SendWebRequest
    Code (CSharp):
    foreach (var item in WebRequestHeaderDict)
    {
        www.SetRequestHeader(item.Key, item.Value);
    }
     
  49. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,272
    Hello, thanks !

    I mean, this position manipulation really isn't suited for arbitrary jumping around streamed media (it should work without issues on local files) - it would have to be a new feature to account for such large position shifts
    for now I think you could either:
    - call .Pause() on the component - currently this pauses the channel (playback) but downloading should continue - and then move the position before unpausing at the desired time
    [I probably can't pause the download/webrequest automatically since it would most likely time out]
    - or just mute playback (set volume to 0) and unmute when needed

    That said, I hope to release a (modified) player component in the near future that could behave more like what you want - though most likely it will just allow you to seek within existing playable data, which makes more sense I think - but I'll keep that in mind
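    The first workaround could be sketched like this - hypothetical only: the `Pause()`/`Resume()`/`PositionInSeconds` names follow this thread and may not match the actual AudioStream API, and the 0.25 s buffering head start is an arbitrary guess:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: pause playback while moving position, give buffering a head
// start, then resume.
public class SeekHelper : MonoBehaviour
{
    public AudioStream.AudioStream audioStream; // hypothetical reference

    public void SeekTo(float seconds)
    {
        this.StartCoroutine(this.SeekCR(seconds));
    }

    IEnumerator SeekCR(float seconds)
    {
        this.audioStream.Pause();                  // pause the channel; download continues
        this.audioStream.PositionInSeconds = seconds;

        yield return new WaitForSeconds(0.25f);    // arbitrary head start for buffering

        this.audioStream.Resume();                 // hypothetical counterpart of Pause()
    }
}
```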

    Thanks for suggesting custom HTTP headers !
    I'll be sure to have this in there, makes perfect sense )
     
  50. MohHeader

    MohHeader

    Joined:
    Aug 12, 2015
    Posts:
    41
    Thanks for your answer,

    I found that I already Pause, and then Resume after the new position is set, currently with a delay of 0.25 seconds.

    But is there any flag I can use to know that the audio is ready to be resumed?
    i.e. that there is enough buffer

    so that instead of a delayed resume of 0.25 s, I just wait until there is enough buffer, then resume.

    I will check the code to see if this flag exists, but thought I'd add the question here too

    Thanks again for your prompt reply