
AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. blaher

    blaher

    Joined:
    Oct 21, 2013
    Posts:
    80
Sounds good. I already have my spectrum data code done anyway, so using Unity's AudioSource is fine, as long as that isn't what is adding a bunch of latency. Reading the FMOD forums, it looks very reasonable to get less than 10 ms latency from recording a source; some claim even lower using ASIO (FMOD_OUTPUTTYPE_ASIO). I tried changing the output type to FMOD_OUTPUTTYPE_ASIO in your AudioStreamInput file and unfortunately it loads up ASIO4ALL, which I have on my system for Guitar Rig... I thought maybe FMOD had an ASIO driver built in. ASIO4ALL has issues that I don't know how to get around, namely that it requires exclusive access, so you can't have a browser running playing audio from Spotify... if one is, ASIO4ALL won't even list that device as an option to use.
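For anyone else trying this: forcing ASIO at the FMOD level is a single call before system init. A minimal sketch using the FMOD Studio C# wrapper (the exact spot in AudioStreamInput's code may differ; FMOD then picks up whatever ASIO driver is installed on the system, e.g. ASIO4ALL):

```csharp
using System;

public static class AsioOutputExample
{
    // Sketch: creating an FMOD system that outputs via ASIO.
    // setOutput must be called after System_Create but before init.
    public static FMOD.System CreateAsioSystem()
    {
        FMOD.System system;
        FMOD.Factory.System_Create(out system);

        system.setOutput(FMOD.OUTPUTTYPE.ASIO);

        system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);
        return system;
    }
}
```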
     
  2. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
fortunately, it isn't - the processing costs something since Unity's audio is touched, but it is noticeably less intensive than what is in the current version
the other option would be to bypass it altogether, but I don't have that ready yet: so in the end there might be two 2D latency-friendly input components - remains to be seen.
as for the ASIO drivers - yeah, FMOD just uses whatever is installed on the system in this case, but thanks for pointing it out - I might look into it.
     
    Last edited: Jun 22, 2017
  3. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
sent PM @blaher / mentioning here FWIW - sometimes I don't get notifications

Cheers
     
  4. ForceX

    ForceX

    Joined:
    Jun 22, 2010
    Posts:
    1,102
Does FMOD support FLAC or M4A/AAC formats? I would like to add an in-game music player that can play back your own music. I'm using a custom-built Unity VLC front end right now but would love to keep it 100% in Unity.
     
  5. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @ForceX
FLAC is supported ( see also https://en.wikipedia.org/wiki/FMOD#File_formats ); AAC ( .caf or .m4a ) should be OK on iOS (note: _only_ on iOS) according to the FMOD forums.

As for the player idea - yes, that's a supported scenario: just point the AudioStream/AudioStreamMinimal component's url to a full local file path, and you'll get events when e.g. playback is finished
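A minimal sketch of that player scenario - note the 'url' field and 'Play()' member names here are illustrative; check the component's public API in the package for the exact names:

```csharp
using UnityEngine;

public class LocalFilePlayer : MonoBehaviour
{
    // AudioStream component on a GameObject with an AudioSource
    public AudioStream.AudioStream audioStream;

    void Start()
    {
        // full local filesystem path of a supported audio file
        this.audioStream.url = System.IO.Path.Combine(Application.streamingAssetsPath, "music.flac");
        this.audioStream.Play();
    }
}
```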
     
    Last edited: Jun 23, 2017
  6. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Recording setup on Windows for capturing currently playing audio on the system, a.k.a.
    "I want to capture the sound being played on Spotify and do some visualization in Unity based on it"
    setup :

    - configure your recording devices similar to this
( sorry about the screenshot resolution; Windows won't let me alt+printscreen/snip it with the context menu opened, so I had to make a selection from the whole desktop screenshot and scale it up ):
    RecordingConfig_Windows.PNG

the 'Stereo Mix' device depends on the specific sound card / drivers, so your system might differ, but notice 'Show Disabled Devices' and 'Show Disconnected Devices' - these should be enabled, otherwise you might not see the mix device.
Note: it's NOT advisable to enable loopback ( 'Listen to this device' ) in the properties of this device.

    Once the device is enabled you can point AudioStreamInput to record from it:
    AudioStreamInput.PNG


    ( you can enable / disable output with 'Listen to device' checkbox here, coming in 1.5.2 )
Latency of AudioStreamInput is rather high; a more latency-friendly AudioStreamInput2D component is coming in 1.5.2

note: the playback config is left untouched - in my case it is as usual:

    PlaybackConfig_Windows.png
     
    Last edited: Jan 16, 2018
  7. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Also forgot to mention - the way AudioStreamInput2D is currently built allows audio data to be passed in from an external source / native plugin, so on Mac it is possible to use e.g. https://github.com/keijiro/Lasp with a neat latency of around 7 ms
Don't have a Windows capture plugin ready, but the process is similar.
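The general shape of "passing audio data from an external source" can be sketched like this (a generic illustration, not AudioStreamInput2D's actual implementation - the component and method names are made up): a capture plugin such as LASP pushes samples into a buffer, and OnAudioFilterRead drains it into Unity's audio stream.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ExternalInputFeed : MonoBehaviour
{
    readonly Queue<float> buffer = new Queue<float>();
    readonly object bufferLock = new object();

    // called from the native plugin's capture callback with interleaved samples
    public void Feed(float[] samples)
    {
        lock (this.bufferLock)
            foreach (var s in samples)
                this.buffer.Enqueue(s);
    }

    // Unity's audio thread pulls whatever has been captured so far
    void OnAudioFilterRead(float[] data, int channels)
    {
        lock (this.bufferLock)
            for (var i = 0; i < data.Length; i++)
                data[i] = this.buffer.Count > 0 ? this.buffer.Dequeue() : 0f;
    }
}
```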
     
  8. blaher

    blaher

    Joined:
    Oct 21, 2013
    Posts:
    80
hmmm. I don't have a 'Stereo Mix' device (Gigabyte mobo with built-in audio, Realtek chipset I think). Yes, show disabled and disconnected devices is checked. I also looked on three other machines I have around here, and only one has such a device. I don't think this is a very good solution when targeting 'any ol gamer' - requiring them to set up a recording device instead of just grabbing the Windows default playback device's output. AudioStreamInput does show all playback and recording devices for me and allows grabbing any output.
     
  9. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
it might be called differently - do you get _any_ devices other than the commonly visible ones?

the windows default playback device has to support this - it is done via the mix interface, which must also be supported by the driver - otherwise you can't simply tap into the audio output as it is; I don't know what the current state of the Windows media API is and whether some lower-level access would help, but I doubt it is as simple as opening a channel

It shows the enabled ones from the sound recording pane as shown above - i.e. those which can be opened for recording

'any ol gamer' would have to do it - there's no other way - you can automate this, btw, by using e.g. AutoHotkey and/or some command line / PowerShell scripting

provided the interface can be enabled and chosen, of course; that you don't have any is rather surprising to me, to be honest - I didn't expect a reasonably modern driver wouldn't have it - but it is apparently not as widespread as I thought; you might check vendor pages for drivers, btw
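To see which devices can actually be opened for recording (i.e. what AudioStreamInput lists), the FMOD C# wrapper can enumerate them - a sketch; exact signatures vary slightly between FMOD versions:

```csharp
using System;
using System.Text;

public static class RecordDeviceList
{
    // Sketch: print all recording drivers FMOD sees, with their connected state.
    public static void Print(FMOD.System system)
    {
        int numDrivers, numConnected;
        system.getRecordNumDrivers(out numDrivers, out numConnected);

        for (var i = 0; i < numDrivers; i++)
        {
            var name = new StringBuilder(256);
            Guid guid;
            int rate, channels;
            FMOD.SPEAKERMODE mode;
            FMOD.DRIVER_STATE state;

            system.getRecordDriverInfo(i, name, name.Capacity, out guid
                , out rate, out mode, out channels, out state);

            UnityEngine.Debug.LogFormat("{0}: {1} @ {2} Hz, connected: {3}"
                , i, name, rate, (state & FMOD.DRIVER_STATE.CONNECTED) != 0);
        }
    }
}
```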
     
    Last edited: Jun 25, 2017
  10. blaher

    blaher

    Joined:
    Oct 21, 2013
    Posts:
    80
got a chance to try the newest build you sent, and AudioStreamInput2D, like the one before, does show all my Windows playback devices and is able to pull output from any of them. I just set it to my speakers and it works like a charm. It seems to have a little lag, but not terrible... not sure how to tell exactly how much, but my eye says a really slight amount. Usable for sure. I'll test on other systems.
     
  11. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
cool, good to hear
but the lag won't get better on a different machine, unfortunately - to go lower, a Windows version of Lasp is needed
     
  12. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Managed to lower latency by a few ms
Here is a demo video of the new AudioStreamInput2D component coming in 1.5.2, capturing system audio output via a mix device, with a crude visualization

     
  13. blaher

    blaher

    Joined:
    Oct 21, 2013
    Posts:
    80
out of curiosity, how are you measuring latency? Using what you sent me, I'm seeing a good bit of latency but am not sure how to tell you the exact amount. Also, I noticed a large discrepancy between tests day to day... as if on my Windows machine there is more/less latency depending on what all is running, or the weather outside :)
     
  14. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
I'm not measuring it exactly - see the video to get a feel - but that has an improved component compared to the version you have
_don't_ use the redirection component, btw - it adds its own latency on top
     
    Last edited: Jun 28, 2017
  15. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
btw, as I found, the whole setup is somewhat fragile on Windows:
the mix device received signal only when a certain default output device was set - in my case it was the headphones output - when the monitor's built-in speakers were selected as system default, the mix device didn't work at all
you might want to experiment to find out what works on your particular device / driver configuration
there is no easy generic way to set these things up, unfortunately

after changing the system default output it is also advisable to restart Unity - at least for older versions
( newer versions, since around 5.4/5 (?), seem to pick up the change )
     
  16. Divistri

    Divistri

    Joined:
    Oct 1, 2015
    Posts:
    9
Hello! I'm interested in buying this asset and am curious about its functionality. I'd like the user to be able to select any music file they have stored locally on their computer and have Unity play it through an AudioSource. Would this asset allow that? Also, would it allow Unity to stream audio from SoundCloud and Spotify?
     
  17. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @DivinityStripes -

- for local files: pass a full accessible local filesystem path as .url and it will be happy ( as long as the format is supported )
- you can use a normal AudioSource, or you can bypass it completely if you want, streaming output directly to the audio output when features of Unity's AudioSource such as 3D position, mixing, or effects are not needed.
- all the streaming components care about is a url pointing to a file/stream/playlist, so as long as the url points to valid audio it can be streamed -
in the case of SoundCloud this should work according to one of our first adopters; Spotify might be problematic: they need an OAuth token in the request header, which there is currently no way of supplying from AudioStream, unfortunately. Otherwise, with a direct url you can access only a 30 sec preview via their Web API.
^ at least that's what I've understood from their documentation, but I'm not exactly a web developer, so I'm not entirely sure whether it's possible with a common http/s request or not. ( On iOS and Android I recommend rather using their (Spotify) SDK for each platform. )
You might have better luck with Unity's UnityWebRequest/UnityWebRequestMultimedia - depending on its current state - since there you can customize the request header; only last time I checked it was not possible to set some kind of streaming flag for audio, so it would work only for downloading entire audio clips in full.
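For completeness, the UnityWebRequest route with a custom header looks roughly like this - it downloads the whole clip before playing, i.e. no streaming; the url and token are placeholders, and depending on the Unity version the send call is Send() or SendWebRequest():

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class AuthorizedClipDownload : MonoBehaviour
{
    IEnumerator Start()
    {
        var www = UnityWebRequestMultimedia.GetAudioClip("https://example.com/audio.ogg", AudioType.OGGVORBIS);
        www.SetRequestHeader("Authorization", "Bearer YOUR_TOKEN"); // placeholder token

        yield return www.SendWebRequest();

        if (!www.isNetworkError)
        {
            // entire file is downloaded at this point
            AudioClip clip = DownloadHandlerAudioClip.GetContent(www);
            this.GetComponent<AudioSource>().PlayOneShot(clip);
        }
    }
}
```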
     
    Divistri likes this.
  18. EmeralLotus

    EmeralLotus

    Joined:
    Aug 10, 2012
    Posts:
    1,462
Great project. I'm interested in making DSP apps and require the least amount of latency. Are there plans for supporting Lasp functionality on other Unity platforms (Windows, iOS, Android) in a release in the near future?
Cheers
     
  19. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Lasp is a word, as they say in the Matrix ;) but seriously: it's just a common native plugin, so it's 'just' a matter of writing each respective native part
personally, I have no intention of doing that right now, unfortunately
technically it's just about retrieving audio data in AudioStream similar to this:
https://github.com/r618/Lasp_Old/blob/master/Unity/Assets/Test/GenericAudioInput.cs#L64
     
    Last edited: Jul 15, 2017
  20. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
V 1.5.2 072017 seems to have made it ( thanks to the asset store people for a very quick review )

    - [Advanced] setting on AudioStream allows user to set "Stream Block Alignment" as workaround for audio files with unusually large tag blocks - typically e.g. mp3's with embedded artwork
    - new low latency 2D input component AudioStreamInput2D + demo scene
    - AudioSourceOutputDevice allows chaining of audio filters
    - AudioSourceOutputDevice further fixes for empty clip on startup
    - new event on stream tags/track change and better stream tags handling in general
    - new GOAudioSaveToFile component allows automatic saving of audio being played on GO
- reorganized project files with more logical grouping ( before upgrading, it's probably a good idea to delete the existing version first )
- updates to documentation and in-Editor texts
     
  21. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Small update:

Seems like I was able to properly use the Google VR plugin present in the FMOD Unity package for playback of common and ambisonic audio, after all.
If all goes well I'd like to include this in 1.6.
- you'll need FMOD 1.09 (current) for this ( I will probably reuse their room definition and updates, too, but have not been entirely successful with that so far )

This should properly spatialize the signal fully in 3D, and with low latency, too. It will probably be constrained to FMOD sound ( i.e. like AudioStreamMinimal ) only at first.

For ambisonic file playback and support, please look at the just released 2017.1.


    I'll probably up the price a little bit with 1.6 release, so grab this now if you're interested :)
     
    Last edited: Jul 21, 2017
  22. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
keijiro's audio input plugin based on PortAudio, https://github.com/keijiro/Lasp , now supports both x64 Windows and macOS
I recommend checking it out if you'd like very low input latency
     
    wetcircuit likes this.
  23. blaher

    blaher

    Joined:
    Oct 21, 2013
    Posts:
    80
    awesome. thanks, I'll check out lasp
     
  24. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Some support for the GoogleVR spatializer provided by the FMOD plugin, for FMOD channels ( not all customizable parameters are shown, and I'm not yet sure what else to include, in fact ):

Will take some time to expose everything properly in the Editor. Nevertheless, the GVR spatialization itself is working correctly with GameObjects in the scene.
     
  25. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
tried ambisonic support in 2017.1, too:
- the OculusNativeSpatializer provided by Oculus in their SDK seems to be OK
- I had less luck with the Google VR SDK for Unity, where their Ambisonic Decoder Plugin couldn't be registered in the Editor - I suppose it's a bug, which will get fixed eventually

But the Google VR plugin provided by the FMOD package used by AudioStream works without problems, ironically - I successfully playtested b-format test audio from http://www.ambisonictoolkit.net in the Editor and in Windows and macOS builds.
     
  26. moh_mah_noureldin

    moh_mah_noureldin

    Joined:
    Jul 26, 2017
    Posts:
    1
Hi, I am developing a standalone Windows application with Unity 5.3.8. The application downloads an AudioClip and passes it to SALSA lip-sync to animate an avatar. My problem is: I want to be able to choose the output device at runtime, but I did not create the AudioSource myself and I am not sure how SALSA works internally.
My question would be: if I use AudioSourceOutputDevice to redirect the audio to a specific output device, will SALSA use this device instead of the default one? Is it a global setting, or do I have to configure it on each AudioSource component (the one created internally by SALSA)?

    Here is a sample of my code
Code (CSharp):
public Salsa3D salsa;
...
// inside a coroutine ( IEnumerator ), since it yields on the download:
WWW www = new WWW (url, postData, headers);
yield return www;
AudioClip audioClip = www.GetAudioClip (false, false, AudioType.OGGVORBIS);
salsa.SetAudioClip (audioClip);
salsa.Play ();
     
  27. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi @moh_mah_noureldin - you can verify whether SALSA uses normal Unity audio by, for example, attaching an (empty) MonoBehaviour with OnAudioFilterRead implemented - it can have an empty body - to the GameObject you now have, and observing the audio levels in the inspector when it plays -
they are these bars:

if the volume bars are responding while the SALSA clip is being played, then AudioSourceOutputDevice can use and redirect the signal
but beware, there will be latency introduced ( it needs its own buffer, too ) - so you'd probably have to work around it by syncing audio and animation manually /
if not, then SALSA uses its own audio subsystem - which I find unlikely

The way the component works is that it takes the AudioSource's, resp. OnAudioFilterRead's, audio buffer and passes it to FMOD - each component is configured individually - that means each playing SALSA source can be played on a different device if needed ( and if the above is applicable ).
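The probe described above is just this - an otherwise empty behaviour; its mere presence makes the inspector draw the volume bars for the GameObject's audio:

```csharp
using UnityEngine;

public class AudioFilterProbe : MonoBehaviour
{
    // Intentionally empty - having OnAudioFilterRead implemented is enough
    // for Unity to show audio levels for this GameObject in the inspector.
    void OnAudioFilterRead(float[] data, int channels)
    {
    }
}
```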
     
  28. Adkaros

    Adkaros

    Joined:
    Dec 14, 2015
    Posts:
    16
    Hello,

    I bought this plugin and asked a question a few months back, it worked great!

    I have some audio needs again for another project and I need to output audio to 4 different speakers, does the audio output device component allow up to 4 different outputs? Haven't tested yet.

    Also, is there any way to play the audio on different channels? I'm not sure how to do this with unity alone but I am messing with the spatial blend and positioning around the audiolistener. But was wondering if AudioStream has a component that would allow this, didn't see anything but doesn't hurt to ask!

    Thanks again!
     
  29. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hey @Adkaros , good to hear!

there is a Speaker Mode available on each component where you can choose e.g. the QUAD speaker configuration, but I'm not entirely sure it works correctly for the AudioSourceOutputDevice component looking at the code right now ( it might actually mimic Unity's selected audio config, depending on which version you use )

You can choose it at any rate to find out - I recommend trying first without redirection on the default device to see what is working / what is not.

If by channels you mean the spatial effect, then I'm not entirely sure what FMOD will do with it - but it should provide the proper effect for the user on all 4 speakers - if you try it out with the Unity spatializer, please let me know.

And just a heads up: I'm using GoogleVR provided by FMOD for full 3D spatialization in the next version, including ambisonics, so chances are it might solve this if the current solution doesn't work correctly.

Feel free to PM me your project configuration in any case if you hit some obstacle, I'll look into it
     
  30. Adkaros

    Adkaros

    Joined:
    Dec 14, 2015
    Posts:
    16
    Thanks for the quick response!

    I've gotten two of the speakers working by panning it L/R and sending to default device (Channel 1-2 output)

Now I am trying to send another L/R pair to outputs 3-4 using the audio output device component, although when I use the component the audio keeps cutting in and out. When I try to play a sound clip I just get white noise/static.

    Any ideas here? Not sure if maybe I missed a project setting, or maybe FMOD setting.

    Edit: Nvm! figured it out- had default speaker mode set to surround 7.1 instead of default, now it works! Sorry for the bother!
     
    Last edited: Aug 2, 2017
  31. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    thanks for letting me know
I still think there is probably resampling missing for the redirected output; I will have to have a closer look
Also, I'll have to think about where/how to reflect the selected Speaker Mode properly, and on which component; it's in two places right now when using redirection with the stream itself
    Glad you got it working !
     
    Adkaros likes this.
  32. sonicviz

    sonicviz

    Joined:
    May 19, 2009
    Posts:
    1,051
    Hi,

    >>NOTE: Support for full 3D and ambisonic audio using GoogleVR coming soon(tm)! Last edited: Jul 25, 2017

    Do you have a rough timeline on this support?

    ty.
     
  33. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi there @sonicviz!
    I hope to sort things out next week, and possibly submit it, too.
    If you purchased already, PM me - I can provide you with alpha build with basic GVR stuff working.
     
  34. sonicviz

    sonicviz

    Joined:
    May 19, 2009
    Posts:
    1,051
    I just purchased it and sent you an email.

Do I need to keep the FMOD GoogleVR folder?
If I keep it and delete the other folders as recommended by the readme, I see some errors.
     
  35. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Thanks!

No, just the content in Plugins is needed - the GoogleVR folder is the interface to an FMOD Studio project.
The readme might not be very clear, I guess - but you need only the native plugins and the Plugins/FMOD/Wrapper/*.cs sources from the FMOD integration
     
    Last edited: Aug 26, 2017
  36. lauraaa

    lauraaa

    Joined:
    Dec 30, 2016
    Posts:
    16
    Hi @r618, I'm interested in the asset and would like confirm some functions before making the purchase :)

1. It seems it's possible to route any specific AudioSource to any non-default audio device - is that correct? And can the player still hear that AudioSource? I'd like to send one specific AudioSource to an external speaker, but still allow it to be heard by the player through headphones, along with the other gameplay audio.
2. Would this function support GvrAudioSource?

    Thank you in advance!
     
  37. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hey @lauraaa,

    that's its purpose, indeed :)

I've added an option to un/mute the original signal in the latest version, which allows playing it on the default output device alongside the chosen other one; muting is on by default since simultaneous playback is not needed very often - so, yes, you'd uncheck the 'Mute After Routing' checkbox and you're good to go.

As for GvrAudioSource - that's a bit more complicated:
as you mentioned, the redirection currently works with common Unity AudioSources via the AudioSourceOutputDevice component, but another - separate from Unity audio - 'minimal' set of components, which use FMOD more directly, can redirect happily without it by setting the desired output driver id directly.
GvrAudioSource is (in this case unfortunately) currently one of those components - so you will be able to direct the signal to whatever output you want with GvrAudioSource, but it won't be possible to play it simultaneously on more than one device :/

However, it is possible to have two instances, each with its own output, and in the case of local files everything is in sync enough - this is much less reliable should they be playing the same network stream, though.

I will probably think about adding proper Unity AudioSource support for GVR, which might come later in a point release.

That being said, the whole GVR support is not yet released on the store - I'm preparing final changes and will submit the 1.6 version very soon, alongside fixes and further enhancements.

I hope that helped nevertheless

    I hope that helped nevertheless

    Cheers and good luck!
     
    lauraaa likes this.
  38. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    submitted
    V 1.6 092017
- AudioSourceOutputDevice - added initialization based on the currently selected output device's sample rate
- added support for PCM8, PCM24, PCM32 and PCMFLOAT stream formats for AudioStream ( Unity AudioSource ) - note: this also enables playback of a wider variety of formats such as MIDI and module audio files with AudioSource
- added support for the GoogleVR 3D spatializer on GameObjects via GVRSource and GVRSoundfield components
- currently playback via FMOD audio only ( no AudioSource )
- room definition support, an audio input GVR component, and integration with AudioSource will come later
     
  39. lauraaa

    lauraaa

    Joined:
    Dec 30, 2016
    Posts:
    16
    Thank you for the quick reply! Will give it a try!
    I think I'll go for (1) play GvrAudioSource on user (to achieve spatial effect), and (2) play AudioSource for speaker.

     
    Last edited: Sep 5, 2017
    r618 likes this.
  40. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
yeah, you can combine as necessary - you can also play two GVRSources, one for each; there's nothing wrong with having a 3D spatial signal on speakers
- one GVRSource with its output device id set to 0, the other to 1 ( or whatever the other device on your system is )
The thing is, AudioSource might have different latency, so if you use the same component type for both there's a better chance of getting them in sync.
     
    Last edited: Sep 6, 2017
  41. lauraaa

    lauraaa

    Joined:
    Dec 30, 2016
    Posts:
    16
    Thank you for the suggestion! Will do that!

    I tested the redirect and it indeed works like a charm! :D Great asset!

I have another question regarding playing the redirected audio. In the Editor, for AudioSourceOutputDevice to work with a non-default device, I have to enable Auto Start, or it plays on device id 0 even if I set the id to 1 before entering play mode. But I don't want the audio to really be played on start - I think I need to play it from a script.

Instead of playing a longer audio clip, I plan to use it for short sound effects with audioSource.PlayOneShot(), so should I do as below -

Code (CSharp):
AudioSourceOutputDevice asod;
GvrAudioSource gvrSource; // ( 'as' is a reserved word in C#, so renamed )

asod.SetOutput(1);
asod.StartFMODSound();
gvrSource.PlayOneShot( clip );
and never call asod.StopFMODSound() until the application ends, since the AudioSource will be played multiple times anyway? Is this the proper way to do it, and would it hurt performance? Or what would be the most performant way to do it, in your opinion?

Thank you again for the advice, audio buffer stuff is like a wild jungle to me..

    Thank you again for the advice, audio buffer stuff is like a wild jungle to me..
     
  42. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hey @lauraaa,

For general user calls please see 'Demo\OutputDevice\OutputDeviceDemo.cs', and please stick to public methods only - the concrete implementation and calls might change in the future; if you use the component's public methods you should be safer :)
Note also that it is advisable to wait for each component to init itself - see the Start() method in the mentioned script, which is an IEnumerator so it can 'wait' until everything is ready.
- this is needed there to e.g. enumerate available audio outputs from the script.

Now, in the case of PlayOneShot things get - surprisingly - a bit more complicated ( again ) - ( the problem is there is no AudioClip present on the AudioSource, and we get all kinds of expected, but also unexpected, behaviors as I found out ) - however, I've prepared a setup which should cover playing clips via PlayOneShot with multiple outputs:

use this script ( name it PlayOneShotWithOutputDevice.cs )
Code (CSharp):
using AudioStream;
using System.Collections;
using UnityEngine;

public class PlayOneShotWithOutputDevice : MonoBehaviour
{
    public AudioClip clip;

    IEnumerator Start ()
    {
        // wait until all AudioSourceOutputDevice components on this GameObject are initialized
        bool ready = true;

        do
        {
            ready = true;

            foreach (var o in this.GetComponents<AudioSourceOutputDevice>())
                ready &= o.ready;

            yield return null;

        } while (!ready);

        foreach (var o in this.GetComponents<AudioSourceOutputDevice>())
            o.StartRedirect();
    }

    void OnGUI()
    {
        if ( GUILayout.Button("PLAY"))
        {
            this.GetComponent<AudioSource>().PlayOneShot(this.clip);
        }
    }
}
and stick it to a GO with an AudioSource:

As you can see, I've attached two AudioSourceOutputDevice components, each with its own output, not autostarting, and only the second one mutes the signal afterwards.
This setup has another advantage in that both are playing almost perfectly in sync.

This approach is also usable with AudioStream - the "radio" component - but that needs to be updated - currently it works with exactly one AudioSourceOutputDevice component only ( unfortunately ) - the change is minimal - you'd have to substitute GetComponent<> with GetComponents<> calls - let me know if you need it / have problems changing it.


I think that should be it - let me know if you have any further questions/issues :)

Cheers!
     
    Last edited: Sep 7, 2017
  43. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Oh, and as for your performance question: it is perfectly fine to leave the redirection running - it just continuously samples the Unity audio buffer and passes its content along - it has minimal impact on the project.
PlayOneShot is generally better for short clips, yes, but you could get away with several Audio(Stream)Sources, changing the clip/url on them and calling .Play() as needed, without noticing - depending on what exactly you're doing in the application.
In general, be concerned about performance only when/if it starts being an issue ( use Unity's profiler on an actual build, not in the Editor, if needed ).
     
  44. lauraaa

    lauraaa

    Joined:
    Dec 30, 2016
    Posts:
    16
    Wow so much to learn, thank you very much for the tips!!
     
    r618 likes this.
  45. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
( updated the script above to actually sync with AudioSourceOutputDevice as it should ;) )
     
  46. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    update is live on the store thanks to very quick submission approval \ (•◡•) /
     
  47. swredcam

    swredcam

    Joined:
    Apr 16, 2017
    Posts:
    130
It seems this asset allows access to the second channel of a 2-channel microphone input - is that the case? In Unity to date I have only been able to access the left channel. I need a sample buffer from each of the two channels, aligned in time. It does not need to be real time, so latency does not matter as long as both channels have roughly the same latency. Is this possible? Would I simply create two different objects, one for each channel, and then process them in parallel?
     
  48. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
NOTICE: please stay on version 1.09.08 of the FMOD Unity integration for the time being

If you are downloading anew, select 1.09.08 ( latest compatible ) on the FMOD downloads page - do not use 1.10.00, which is selected by default.

The FMOD C# API was changed - I will have to update the package and submit a new version to accommodate the changes ( which will be incompatible with pre-1.10.00 FMOD Studio Unity integrations ).
     
  49. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
Hi @swredcam
yes, this works if you have a multichannel recording device

E.g. if these are the properties of a line-in input:

3 : Line In (IDT High Definition Audio CODEC) rate: 48000 speaker mode: STEREO channels: 2

and you record via it using an AudioSource-enabled input component of AudioStream, you can access the automatically created AudioClip's channels (in this case 2) separately as needed.

( look for the channels property of AudioClip in the documentation )

NOTE: this should work with Unity's Microphone class as well - Microphone.Start creates a clip which should reflect the input source channel count - only in my current testing on an older Unity version I was unable to start it for some reason - so this might be Unity version specific ( read: either working, or not :0)

hope that helps !
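Getting at the individual channels is then a matter of de-interleaving the clip data - a sketch (Unity stores samples interleaved, L R L R ... for stereo):

```csharp
using UnityEngine;

public static class ClipChannels
{
    // Split an interleaved AudioClip into one float buffer per channel.
    public static float[][] Split(AudioClip clip)
    {
        // clip.samples is the per-channel sample count
        var interleaved = new float[clip.samples * clip.channels];
        clip.GetData(interleaved, 0);

        var result = new float[clip.channels][];
        for (var ch = 0; ch < clip.channels; ch++)
        {
            result[ch] = new float[clip.samples];
            for (var i = 0; i < clip.samples; i++)
                result[ch][i] = interleaved[i * clip.channels + ch];
        }
        return result;
    }
}
```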
     
  50. swredcam

    swredcam

    Joined:
    Apr 16, 2017
    Posts:
    130
Using Microphone.Start creates a single-channel clip. That's the problem I'm trying to solve. For a 2-channel input device I need to capture both channels in parallel. If your asset provides this for sure, then I will purchase it. Thank you!
     
    r618 likes this.