
AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
You can check mediaCapacity - it will keep increasing while playback is paused (at first I thought of mediaAvailable, but that's only updated when actually playing the audio..)
Give it some reasonably large difference, e.g. a few kB (it's kind of hard to estimate how much since it's the size of the encoded bytes)
The state should be captured internally by 'starvingFrames'; if the above doesn't work you can make that public and check for it to be == 0 (I removed starvation events some time ago - they would probably be helpful here..)
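A minimal polling sketch of the approach above, assuming the component exposes 'mediaCapacity' as described; the component type/namespace, the UnPause() call and the 8 kB threshold are illustrative assumptions, not the asset's confirmed API:

```csharp
using System.Collections;
using UnityEngine;

// Sketch only: polls the growing 'mediaCapacity' while playback is paused and
// resumes once a few kB of new encoded data has buffered.
// 'AudioStream.AudioStream', 'mediaCapacity' and 'UnPause()' are assumed names
// standing in for whatever the component actually exposes; the threshold is a
// guess and would need tuning per stream bitrate.
public class UnpauseWhenBuffered : MonoBehaviour
{
    public AudioStream.AudioStream audioStream; // assumed component reference
    const long kMinNewBytes = 8 * 1024;         // illustrative threshold

    public IEnumerator WaitForBufferThenUnpause()
    {
        long capacityAtPause = audioStream.mediaCapacity;

        // wait until enough new encoded bytes have arrived while paused
        while (audioStream.mediaCapacity - capacityAtPause < kMinNewBytes)
            yield return null;

        audioStream.UnPause(); // hypothetical resume call
    }
}
```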
     
  2. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
.. but the block/align parameters are helpful to kick off the decoder, so it should help to start/unpause again only once the amount of available data (the difference between the 'old' and 'new' mediaCapacity in the above) is somewhat larger
     
  3. mrst003

    mrst003

    Joined:
    Jul 22, 2020
    Posts:
    4
    I'm trying it based on the information you gave me!
Thank you for taking the time to respond to my question.
     
  4. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    update just submitted:

    2.4.4 012021 vaccine#0

    Updates/fixes:
    - GOAudioSaveToFile : corrected platform save location
    - AudioStreamBase : simple custom HTTP request headers support - call 'SetCustomRequestHeader(key, value)' on component before Play()
    - IL2CPP builds : added support for optionally disabling array bounds and null checks in audio loops via ENABLE_IL2CPP #define : please see 'Building with turned off IL2CPP runtime checks' in Documentation.txt
    ^ helps w/ runtime CPU usage
    - UnityWebRequest usage compatible with 2020.2

    - tested with FMOD 2.01.07
    - native plugins compiled against FMOD 2.01.07
    - should properly support and run on Apple Silicon
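A short usage sketch of the custom HTTP request header support from the changelog above; the component reference, the base type name and the header values are assumptions for illustration:

```csharp
using UnityEngine;

// Sketch of the new custom HTTP request header support: per the changelog,
// call SetCustomRequestHeader(key, value) on the component before Play().
// The 'AudioStream.AudioStreamBase' type name, the component reference and
// the header values here are illustrative assumptions.
public class AuthorizedStream : MonoBehaviour
{
    public AudioStream.AudioStreamBase audioStream; // assumed component reference

    void Start()
    {
        audioStream.SetCustomRequestHeader("Authorization", "Bearer <token>");
        audioStream.SetCustomRequestHeader("X-Client", "MyGame/1.0");
        audioStream.Play(); // Play() assumed to start the stream as usual
    }
}
```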
     
  5. allan-oooh

    allan-oooh

    Joined:
    Mar 29, 2019
    Posts:
    53
    Hello,

    I'm giving this plugin a spin and I'm not sure if it does what I want, or I'm just not using it correctly.

    Here's the problem I'm trying to solve:

    Load a finite music file (mp3, ogg, or aac) from the internet and play it (on an iOS device) as soon as possible with all audio controls available. This almost works using Unity's DownloadHandlerAudioClip configured to streamAudio, except I have the following issues:
    - can't seek the audio clip (even after the download is complete)
    - aac audio doesn't play at all until the download is complete

    I was thinking this plugin would be able to handle my use case but from my testing of the AudioStream component with a disk buffer type I have the following shortcomings:
    - ogg downloads the entire file before playback starts
    - aac audio doesn't play at all (I haven't tried using AudioStreamDownload which I assume would have parity with the native approach, but again can't play while downloading)
    - if the track loops (or ends and is played again) the file is downloaded again (creating an audio delay)
- seeking is much slower than native - even when the entire clip is downloaded
    - seek sometimes fails silently (although the playhead "moves")
    - there doesn't seem to be any way to determine if a seek is in progress

    Is there a different way to use this plugin that would better suit my use case?
     
  6. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
@allan-oooh
to at least partially get to the state you're describing I would have to add an option to progressively store either the decoded or the original data as the file is being downloaded, and play it simultaneously - I was thinking about this but it's not there; I'll keep thinking about it
[ e.g. AudioStreamDownload has an optional cacheId which, when entered, is used to locate previously downloaded data and use that for playback instead, without touching the network/original media - I guess something similar might work for this too, for non-changing finite files; - AudioStreamDownload can't be used for playback as it is now since it doesn't run in realtime ]

- AAC can't be netstreamed by FMOD (it should play after it's downloaded though)
- the ogg limitation is there because I couldn't make FMOD behave in a streamed fashion using UnityWebRequest for the ogg/vorbis format - if you don't require secure HTTPS transport you can try the Legacy component for ogg/vorbis, though you'd have to add seeking (channel.set/getPosition) there

There are more problems right now on iOS - now that I think about it, I'm surprised things work in the main AudioStream component, since IL2CPP had problems with FMOD static instance callbacks - I consider iOS not properly supported by AudioStreamBase with callbacks since I added UnityWebRequest support some time ago
- I'm working on a player replacement which should use managed code only and cover mp3/vorbis only - to handle UWR + platforms more easily - but it's not there yet
As for seeking - this is not fully implemented and currently relies only on the internal FMOD playback buffer, which is less than optimal for sure and would have to be fixed - but I'm a bit reluctant here since there should be a much better way of controlling this in the new player mentioned above
     
  7. jGate99

    jGate99

    Joined:
    Oct 22, 2013
    Posts:
    1,952
Hi @r618
Great plugin! Does it have a podcast sample?
If not, how would I use it to play a podcast? Does it have a podcast feed parser, or what?
A high-level view, please
Thanks

    Update:
    I have transistter fm feed
     
    Last edited: Jan 24, 2021
  8. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
@jGate99 ,
it needs a 'final' media location (I put final in quotes since UnityWebRequest can e.g. follow redirects, hopefully you get the idea)
it doesn't have an RSS/podcast feed parser included

I've worked with RSS/Atom/JSON feeds previously though - it might be worthwhile to have a look at how current podcasting is mainly disseminated, to see if it's feasible to add something like that - so a good suggestion ~

I suppose you meant a transistor.fm feed: I examined a random podcast feed they had on their main page and it looks like their RSS content follows the common Atom format (with some extra fields added for itunes, googleplay, ...); the direct media/mp3 link is in the enclosure attribute
I would probably look at .NET's SyndicationFeed to see if it's able to parse it, but I haven't tried it for some time..

All in all I would have to implement a whole podcast player for this, which was not the point, but I'll think about it
Hope this helps nevertheless!
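For reference, a minimal sketch of the SyndicationFeed idea mentioned above - pulling the enclosure (direct media) links out of a podcast RSS feed; the class and method names are illustrative, not part of the asset:

```csharp
using System;
using System.ServiceModel.Syndication; // .NET's built-in RSS 2.0 / Atom parser
using System.Xml;

// Sketch: extract the direct media (mp3) links from a podcast feed - as noted
// above, the media URL lives in the enclosure element, which SyndicationFeed
// surfaces as a link with RelationshipType "enclosure".
public static class PodcastFeed
{
    public static void PrintEpisodeUrls(string feedUrl)
    {
        using (var reader = XmlReader.Create(feedUrl))
        {
            var feed = SyndicationFeed.Load(reader);
            foreach (var item in feed.Items)
                foreach (var link in item.Links)
                    if (link.RelationshipType == "enclosure")
                        Console.WriteLine($"{item.Title.Text}: {link.Uri}");
        }
    }
}
```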
     
    jGate99 likes this.
  9. jGate99

    jGate99

    Joined:
    Oct 22, 2013
    Posts:
    1,952
Thank you for your detailed reply - yes, I was able to see the feed and I can provide the link to your plugin with ease now
Thanks for building such a feature-rich plugin
     
    r618 likes this.
  10. allan-oooh

    allan-oooh

    Joined:
    Mar 29, 2019
    Posts:
    53
Thanks for the information. I'll stick with my custom solution for now. If Unity exposed just a touch more of the audio processing underlying DownloadHandlerAudioClip, I would be golden.
     
  11. NikH

    NikH

    Joined:
    Dec 24, 2013
    Posts:
    14
    Hi there, I'm trying to simulate a small audio mixer in the Unity scene, with audio coming spatialised out of a speaker that is also in the scene. I have managed this, with the exception of being able to get traditional aux sends on several AudioSources sending to the input of another AudioSource (e.g. considering it as an Aux channel) to have a reverb applied prior to spatialisation. I can achieve this pre-spatialised reverb effect through simply inserting an audio filter such as reverb after each AudioSource. So, what I am wondering is, using AudioStream, can a split of several AudioSource outputs be sent to the input of a single AudioSource?
     
  12. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi @NikH , the plugin deals with inputs and outputs in the 'master' sense only (i.e. what is available as a device on the machine); it has no in-app mixer routing per se / it has some internal scripting support which could be used to script this, but I would have to verify how spatialization is affected - I would expect issues with it, I think /
If I got your question right though: have you tried using an Audio Mixer for this? E.g. you can have all sources sent to your Aux mixer group and put the reverb on it (again, not _entirely_ sure about spatialization here.. - but I think it should work - if not, there's probably no other option than to use something like Resonance, which works properly as a mixer effect)
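The Audio Mixer suggestion above can be scripted roughly like this; the mixer asset, the 'Aux' group name (with a reverb effect added to it in the Audio Mixer window) and the source list are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Sketch: route several AudioSources to an 'Aux' mixer group that has a reverb
// effect on it, so they share one reverb instead of one filter per source.
// The mixer asset, the group named "Aux" and the sources array are assumptions.
public class AuxRouting : MonoBehaviour
{
    public AudioMixer mixer;        // mixer asset containing an 'Aux' group with reverb
    public AudioSource[] sources;   // sources to send through the aux group

    void Start()
    {
        // FindMatchingGroups returns all groups whose path matches the name
        var aux = mixer.FindMatchingGroups("Aux")[0];
        foreach (var s in sources)
            s.outputAudioMixerGroup = aux;
    }
}
```

Note the spatialization caveat above still applies: AudioSource output reaches the mixer group after the source's own processing, so the reverb placement relative to spatialization would need verifying.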
     
  13. germanban

    germanban

    Joined:
    Jul 8, 2018
    Posts:
    12
    Hello, been a while (almost a year!) considering getting this asset for my project but I was afraid it would be too much for an audio noob like me. Still, I have seen that this is THE asset to get if you want to get split audio for local VR-PC games.

Usually I try to have a look at an asset's documentation beforehand to get an idea of what I'll be dealing with, but I think there's no documentation available for AudioStream before getting the asset (please correct me if I just didn't find it!). I've also searched around this thread and the reviews and found some hints about how to go about it.

Anyways, I'm gonna have to add split audio to my game, so I'm writing this preemptively to see if there's a documented or "proper" method to add what I'm looking for with this asset:

My game consists of 1 VR player vs 1-2 PC players (splitscreen if there are 2). Currently I'm limited by Unity's single listener AND the inability to split audio to different outputs. The goal would be to have 1 listener for the VR player with output to the VR headset, and 2 listeners for the PC players that would be mixed "halfways" and output to whatever the PC player(s) are using.

    I think I have read that for the multiple audio listener I have to use something like the free "MultiAudioListener" package and then combine it with AudioStream to make each listener output to different hardware. Is that the way, or should I look for some other system for that? (or is it multiple audio listener built in on AudioStream?)

    So if that's the way to do it, I guess I'll have to add some in-game selection of outputs (just as in the multiple output demo).

    TL;DR: would it be viable for me to use this asset to get vr-pc split screen audio with up to 3 listeners (1 for vr, 1-2 for PC), on an existing project (2019.1.8f1), being a schmuck with no previous FMOD experience? Would I need to also get another package for the multi listener stuff and if so, what is considered to be the best one? Thanks in advance and sorry for the rambling and if this has been answered before.
     
  14. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hi @Jorlman ,
Yes, it's probably a good idea, I'll try to figure something out

The asset can help you /only/ with outputting to different devices, and it doesn't have its own multi audio listener
Both of these are primarily made possible with Unity audio C# scripting (and one audio mixer plugin - which can be used independently); FMOD is used to access/manage the physical devices

So for screen split, yes, you would need to do it separately; the asset can help you pick and play AudioSources on a selected output
But you'd have to script everything else - the way this is done in general, you can merge/filter/combine the PCM signal in `OnAudioFilterRead` and/or 'PCMReadCallback' callbacks [possibly on more than one AudioSource] and pass it around / read it from e.g. a combiner script fed from more sources
^ that's a very high-level conceptual overview; it has to be done rather carefully when spatialization is involved - the stock Unity spatializer works better with PCMReadCallback (doesn't always work properly with OAFR at all stages, IIRC)
- I know people use Google Resonance for spatialization - that has a different flow, being an audio mixer plugin (but it also has the added benefit of being true 3D)
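A very rough sketch of the merge/combine idea above, using stock Unity audio only; both class names and the naive summing mix are illustrative (real code would need to handle buffer timing, channel counts and thread safety much more carefully):

```csharp
using UnityEngine;

// Sketch of a PCM 'combiner': each tapped AudioSource forwards its buffer from
// OnAudioFilterRead into a shared combiner, which sums the contributions in its
// own OnAudioFilterRead. Both components must sit on GameObjects with an
// AudioSource attached so Unity invokes the filter callback.
public class PcmTap : MonoBehaviour
{
    public PcmCombiner combiner; // assign the shared combiner in the inspector

    void OnAudioFilterRead(float[] data, int channels)
    {
        combiner.Mix(data);                        // forward this source's PCM
        System.Array.Clear(data, 0, data.Length);  // optionally silence the original
    }
}

public class PcmCombiner : MonoBehaviour
{
    float[] mixBuffer;
    readonly object mixLock = new object();

    // called from the taps on the audio thread
    public void Mix(float[] data)
    {
        lock (mixLock)
        {
            if (mixBuffer == null || mixBuffer.Length != data.Length)
                mixBuffer = new float[data.Length];
            for (var i = 0; i < data.Length; i++)
                mixBuffer[i] += data[i];           // naive summing mix
        }
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        lock (mixLock)
        {
            if (mixBuffer == null) return;
            for (var i = 0; i < data.Length; i++)
                data[i] += mixBuffer[i];           // emit the combined signal
            System.Array.Clear(mixBuffer, 0, mixBuffer.Length);
        }
    }
}
```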
     
  15. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
  16. djirving82

    djirving82

    Joined:
    Feb 8, 2016
    Posts:
    8
I purchased your AudioStream asset in the hope of using it to send 3D audio from multiple listeners (players) to multiple audio devices. I like your asset, but after reading the docs repeatedly and 8 hours of trial and error, I have concluded that it isn't possible. I'm trying to make a head-to-head arcade-style game with two controllers, two monitors, and two 3D audio devices, one for each player. If I am missing something on how to do this, please let me know. Do you foresee this being possible in the near future? Has anyone pulled this off in Unity? Thank you for your time, and a great asset.
     
  17. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi @djirving82 , if I got this right, something like this will send spatialized sound to an output device (you can also use audio mixer groups, with the AudioSourceOutputDevice component/script removed):
[attachment: Screenshot 2021-02-12 060131.png]
have something like this for both players with different outputs and their spatialized audio will be heard on each
but note that this is from the point of view of a single listener (the main camera); what I guess you need is proper split screen - which the asset doesn't have - some options were mentioned earlier in the thread, and you'd probably have to do some scripting to make them work together - the component itself just takes whatever is being played in the audio buffer and sends it to the configured output, so you can modify/update it before that

^ the above assumes a single machine; current networking is limited to LAN and a single stream direction; I hope to have something more Mirror-compatible, but no ETA
     
  18. germanban

    germanban

    Joined:
    Jul 8, 2018
    Posts:
    12
    Well that was a coincidence, hah

    I've been carefully looking at the documentation and trying to soak up all the audio stuff I can before diving into this because it seems it could be quite the time consuming thing to add to the project (and I don't wanna spam this forum with questions before I'm sure I've thoroughly looked at everything I could)

So, @djirving82 , it seems that the only choice other than using AudioStream you (we)'d have for what you're looking for would be to implement Wwise; there's some documentation about it going around, but Wwise is scary just to look at, and even then the split-listener / split-output solutions there seem a bit hacky.

You might want to look into implementing MultiAudioListener and making it work with AudioStream's audio source output device. Now, if I'm not mistaken, that asset works by actually duplicating AudioSources, and Unity has a limit on simultaneous AudioSources that would get halved by using it (if you're using 2 AudioListeners). I'm also scared of the potential performance hits that could bring. I'll come back here after I finish some other things higher on my game's priority list, though.
     
  19. djirving82

    djirving82

    Joined:
    Feb 8, 2016
    Posts:
    8
    Yeah, I got it as far as your above example, but I need individual listeners on the players themselves. I will see if I can find scripting answers to this. Thank you for your time.
     
  20. djirving82

    djirving82

    Joined:
    Feb 8, 2016
    Posts:
    8
    I am not using that specific multiaudiolistener, but I am using another that does the same thing, and it does not work with AudioStream as far as I can tell. I will try the one you linked just in case. If I have any success I will report back.
     
  21. sandy999

    sandy999

    Joined:
    Nov 22, 2020
    Posts:
    1
    Hi!

Looking into this for a project in my studies. Is it possible to use this asset to input audio from a DAW or a soundcard into Unity for sound visualization? I read that ASIO is not supported, and I also saw someone try to get audio from a DAW earlier in this thread, but I couldn't find whether they managed to find a feasible solution. I'd like to know if it's possible before I buy it.

    Thank you for your efforts.
     
  22. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi, please download and run the demo app (download links are on the store page) and run the AudioStreamInput/2D scene
your system output speakers should be visible there
/ you might need to use Soundflower or an equivalent on macOS
     
  23. jamesjenningsard

    jamesjenningsard

    Joined:
    Jun 26, 2020
    Posts:
    6
Hi - I apologize preemptively for my incoming novice question.

I would like to add a text input object, where the Input Field's "On End Edit (String)" sets the "Url" input under [Source] on an object with the "Audio Stream (Script)" component.

However, in the Unity UI, other useful functions show up (Play, Stop, etc.) but I can't work out how to set the Url field.

Again - apologies, as this is probably a novice question and I might be using the wrong vocabulary.

Thanks in advance to anyone who might be able to help.
     
  24. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi @jamesjenningsard - you are correct - the field is missing from the automatically generated recipients in the UGUI dynamic targets menu ( due to how the Unity serialization system works )
there are generally two ways to do this right now, you can either
- write an event handler in your class and assign the audiostream.url field in its body, so something like
Code (CSharp):
    public void SetUrl(string url)
    {
        this.audiostream.url = url; // audiostream is a reference to the AudioStream GO in the scene
    }
then pick this event handler on your runtime GO ( not on the AudioStream GO )
or you can add
Code (CSharp):
    public string url_UI { get { return this.url; } set { this.url = value; } }
to AudioStreamBase.cs around line 398
- url_UI will then be pickable in the 'Dynamic string' targets of the AudioStream runtime GO in the UnityEvent on 'On End Edit'
(the code will just forward the value to the actual .url field of the AudioStream GO):
[attachment: upload_2021-3-18_14-56-7.png]

Hope this makes sense!
I should probably think about this and make something more user-friendly for the UI
Anyway, let me know if you're able to do this ^
     
  25. jamesjenningsard

    jamesjenningsard

    Joined:
    Jun 26, 2020
    Posts:
    6
    Hi friend!
    The first one, I couldn't wrap my brain around (more learning to do, on my end).

    The second one worked like magic.

    Thank you so much! Really appreciate it.
     
  26. MaxWalley

    MaxWalley

    Joined:
    Nov 15, 2019
    Posts:
    2
Hi - I'm interested in your plugin for placing different audio channels of a single device in different areas of the Unity scene. Would your plugin be able to support this? I downloaded the test project, and when I connected my interface to the AudioStreamInputDemo an error occurred saying I had too many channels. Is this solvable? I am using 196 input channels.
     
  27. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
replying via email..
     
    MaxWalley likes this.
  28. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
FWIW, thought I'd mention this in the meantime -
got initial Mirror support working; it's 'podcast quality' at best so far, so it's not there yet
will tend to it as time permits - somehow I've been swamped with other work and can't get to it properly o/
     
  29. LostPanda

    LostPanda

    Joined:
    Apr 5, 2013
    Posts:
    173
hi @r618 , is ASIO supported now? thanks
     
  30. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
  31. LostPanda

    LostPanda

    Joined:
    Apr 5, 2013
    Posts:
    173
    r618 likes this.
  32. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
    Hi, I'm thinking about buying the AudioStream asset, but just wanted to check it fits my use case first.

I'm making a virtual production app in Unity. There are two microphones coming into Unity. One is a presenter in the studio; the other is a remote contestant. The presenter needs to be able to hear the contestant but not themselves, and the contestant needs to be able to hear the presenter but not themselves. Everything needs to be sent (via something like Loopback) to OBS for live streaming. The mics need to be fed into Unity so they can be spatialised according to the positions of the presenter and contestant in the Unity scene.

    I tried the demo build in the asset store. It allowed me to channel audio separately to channels 1 and 2 of my 8-output audio interface, but didn't give me options to try the other channels, so just seeing if this is possible before going ahead and buying it.

    Thanks!
     
  33. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
hi
outputs have to be configured at the OS audio level; the plugin can only see what's already accessible to the user
this looks like it's doable but needs work; there's nothing exactly for this OOTB
if you want to restrict which inputs are audible you'd have to either script it, or - if relying on spatialization - note that using microphone input with built-in spatialization would mean rather large latency on the final output
/ it's possible to use other spatialization such as Google Resonance for better results
     
  34. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
Thanks for getting back to me so quickly!

    So do you mean if I didn't see the extra outputs (3-8) in the demo build, I'm unlikely to be able to access them? I'm guessing when I set the default output for Mac OS it's automatically setting the stereo out and not the full 8 channels, though when I play 5.1 surround sound on my system it's working correctly. I'm not sure if Soundflower works on Macs anymore, but that used to give the option to set a multichannel version as the default OS output. I'll try that with the demo to see if it works.

It's not really that I want to restrict audition within Unity; I want to be able to route audio to particular channels. For example, spatialised stereo of the presenter mic could go to channels 1 & 2, and everything else to channels 3 & 4. I can then choose what's heard by whom outside of Unity by routing the outputs.

    I've already got the mics running into Unity and being spatialised. There is some latency but it's an acceptable level. I'll check out Resonance though. Thanks for the tip!

Btw, does AudioStream work with Unity's built-in audio system, or is it its own system, like FMOD?
     
  35. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
    I just used Blackhole 16 channel virtual output and had access to all 16 channels in the demo build, so that's a success. So if I can route audio within unity to the different output channels I'll definitely buy AudioStream. Can I just double-check that this is the case? Doesn't seem like there are any other tools doing what yours does, if routing inside unity is possible. Great work!
     
    r618 likes this.
  36. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
I've taken the plunge and bought AudioStream. If there's any documentation relevant to what I'm trying to do, could you point me towards it, please?
     
  37. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
Hi, I'm getting a whole load of "type or namespace could not be found" errors straight after loading the asset. Also a couple of deprecated build target warnings for libfmod.dylib. Any idea how to fix the errors? I'm on Unity 2020.2.6 and macOS 11.2.3
     
  38. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
    Sorry! I should really learn to read the "read me" first. It's all in there
     
  39. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
Hi, is it possible to set the output device for a ResonanceInput? I can't work out how. If I can, though, that means I can send one mic through a Resonance instance to one virtual output (e.g. Soundflower) and the other to another (e.g. Blackhole).
     
  40. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
this is problematic for more than one source with different output channels (different mix matrices), in that AudioStreamOutputDevice sets the same mix matrix for *all* sounds on a *given* output - if you had your mics routed to a different output each, they could each use their own set of channels (mix matrix) -
I added a new set of components over time where each sound can set its own mix matrix - MediaSourceOutput* - which use FMOD directly, not going through Unity audio, but input streaming is missing for them
It's not impossible to add the missing DSPs which would capture input instead of playing file media, but it's not there right now
- in the case of Resonance you can set the mix matrix on the playing channel and it will play on the output based on it ( at least I think it should work that straightforwardly )

with the existing components this is done with AudioStreamInput(/2D) + AudioSourceOutputDevice components on the same game object, but note that if the output is the same for both mics, only one matrix will be used, as mentioned above

Ask questions if something's not clear, but it should be possible to do all the changes you want using the existing sources since all the parts are there
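For illustration, setting a per-sound mix matrix on an FMOD channel directly looks roughly like this; setMixMatrix is the FMOD API call, while the helper class and the routing target (a stereo source to output channels 3/4 of an 8-channel device) are assumptions:

```csharp
// Sketch: route a stereo FMOD channel to output channels 3/4 of an 8-channel
// device via a mix matrix, muting all other outputs. The matrix is laid out
// as matrix[outchannel * inchannel_hop + inchannel]; FMOD's
// ChannelControl.setMixMatrix takes (matrix, outchannels, inchannels, inchannel_hop).
public static class MixMatrixExample
{
    public static void RouteToChannels34(FMOD.Channel channel) // from system.playSound(...)
    {
        const int outchannels = 8;  // e.g. an 8-channel output device
        const int inchannels  = 2;  // stereo source

        var matrix = new float[outchannels * inchannels]; // all zero = muted

        matrix[2 * inchannels + 0] = 1f; // source L -> output channel 3
        matrix[3 * inchannels + 1] = 1f; // source R -> output channel 4

        channel.setMixMatrix(matrix, outchannels, inchannels, inchannels);
    }
}
```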
     
  41. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
should be doable w/ setOutput on the created system, but I'd have to verify everything to see if it can be set up properly
to sum up: you probably want each Resonance input to play on its own output, with its own mix matrix to filter the output channels - is that correct?
     
    ProjectileVomitTV likes this.
  42. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
Yeah, that sounds about right. I need to send the two mics (via Resonance inputs) either to different channels of the same multichannel output device or to separate output devices. It seems like the easiest way to get separate outputs from Unity, since the other ways I've looked at only involve mono sound.
     
  43. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
FWIW you can test two microphone inputs with the existing solution by using the AudioStreamInput component (which does spatialization) and adding an AudioSourceOutputDevice to each on separate game objects - you can then set the mix matrix on each separately for separate output devices/channels
hard to tell when I'll be able to add proper extra device support to ResonanceInputs - I have to find some time for this
     
  44. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
Yeah, this is the bit I'm struggling with - working out how to access the created system for a ResonanceInput in order to set the output. Is there a way of doing it?

I think this is one of the first things I tried yesterday. If I remember right, it did work but the latency was quite high. I'll have another look at it. The latency might still be workable. I just need the two people with a mic each to be able to have a coherent conversation and not get wildly out of sync with their WebCamTexture videos.
     
  45. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
Ah ok, this isn't what I tried. I just connected Unity's Microphone through an AudioSource into an AudioSourceOutputDevice. I'm not sure how to connect an AudioStreamInput with an AudioSourceOutputDevice. Same with connecting a ResonanceInput with an AudioSourceOutputDevice - I'm not sure how to pass one to the other. Do I need to directly access the mix matrix? How do I do that, in that case?
     
  46. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
attach both to the same game object (to be safe, with AudioStreamInput above AudioSourceOutputDevice in the inspector, due to how the Unity audio buffer works / used to work)
connecting a ResonanceInput with an AudioSourceOutputDevice won't work, though; the Resonance (and MediaSource*) components don't go through Unity audio
for mix matrix usage please see the demo scenes where output channels are used - AudioSourceOutputChannelsDemo and MediaSourceOutputDeviceDemo, IIRC
     
    Last edited: May 2, 2021
  47. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
everything beginning with 'AudioSource' or 'AudioStream' is compatible with Unity game objects; the rest (except the mixer plugin, which can be used with game objects with some limitations) uses the FMOD Integration directly and doesn't go through Unity audio
     
  48. ProjectileVomitTV

    ProjectileVomitTV

    Joined:
    Aug 12, 2020
    Posts:
    30
    Ah right, I've got it now, thanks! It's actually way simpler than I was trying to do.
     
  49. CONGOBILL

    CONGOBILL

    Joined:
    Mar 4, 2020
    Posts:
    17
Hi, I would like to play multiple audio sources on different output devices on Linux. For example, in a rally game all sounds are played on the speakers, but the co-driver's talk is played on my headphones. Is this possible with this plugin? Thanks
     
    AaronMahler likes this.
  50. jGate99

    jGate99

    Joined:
    Oct 22, 2013
    Posts:
    1,952
Hi @r618
I want to build a podcast player using your plugin, just like the attached UI,
with controls like a seekbar, play, pause, fast-forward and rewind 10 seconds, closed captioning, 2x / 3x speed,
a remote audio stream like https://chtbl.com/track/28D492/traffic.megaphone.fm/SLT2018420425.mp3, and downloading that stream for offline playback,
etc.

Can you please provide a high-level view of which plugin components I should use for this, what is offered out of the box, and what needs custom code?

Thanks