AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
I'll look into it as time permits - ideally I'd leave the scrubbing to the timeline; I'll see if this is possible. If not, that'd probably be a longer endeavour.
    I'll post here in either case
     
    rekka3000 likes this.
  2. rekka3000

    rekka3000

    Joined:
    Feb 9, 2014
    Posts:
    46
I'll look into it too! :)
     
  3. rekka3000

    rekka3000

    Joined:
    Feb 9, 2014
    Posts:
    46
    So I did a bit more research. It seems it's already kind of possible with your tool, but Unity crashes. Here is what I did to get it working with the timeline. Scrubbing also worked with this. Perfect, except for the Unity crash.

    1. I opened your demo scene "OutputDeviceDemo"
    2. On the game object "AudioSource + Output Device" I added a "Playable Director" script.
    3. I then went to Asset > Create > Timeline
    4. I added the newly created timeline to the "Playable" field of the "Playable Director" component added in step 2.
    5. I turned off the "Audio Source" on the game object "AudioSource + Output Device".
    6. I added the sine audio file as an audio clip to the timeline and set it to loop.
    7. To the left of the audio clip you can select an audio source. I selected "AudioSource+OutputDevice" as the audio source.
    8. I moved the Playable Director script to just below the disabled Audio Source.

    Pressing play brings up the GUI you made to select output devices and to play / stop the clip. Pressing Play seems to play the audio clip on the timeline, and selecting the output device also works. I've tried it with different audio clips I have, just to make sure it wasn't secretly playing the "Sine" audio file that you had already set, and it worked fine.

    Unity crashes when you stop playing the project though. If you play the project and then turn it off, Unity consistently crashes and spits out a log file. I've attached the log here and a screenshot of the crash. I'll submit it to Unity too, but I'm wondering whose side this crash happens on. Any ideas?
     

    Attached Files:

  4. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    the way the redirection works is that it takes the signal - coming from an AudioSource and its AudioClip - currently being played on the game object, passes it to an FMOD callback (which I'm currently in the process of redesigning a bit due to IL2CPP), and optionally mutes the original signal.
    The last part has a side effect: if you don't mute (i.e. mute after routing is disabled), audio can be played simultaneously on the output device id and on the default output (played by Unity) - but that's just an aside.
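    The mechanism described above can be sketched roughly like this (purely illustrative - the component and buffer below are hypothetical stand-ins, not the actual AudioSourceOutputDevice implementation):

    ```csharp
    using UnityEngine;

    // Hypothetical sketch of the redirection idea: Unity hands us the
    // AudioSource's mixed signal in OnAudioFilterRead; we copy it out for an
    // external consumer (here just a buffer standing in for the FMOD callback)
    // and optionally zero the original so Unity's default output stays silent.
    [RequireComponent(typeof(AudioSource))]
    public class RedirectionSketch : MonoBehaviour
    {
        public bool muteAfterRouting = true;   // mirrors the 'mute after routing' option
        float[] routedBuffer = new float[0];   // stand-in for the FMOD-side buffer

        void OnAudioFilterRead(float[] data, int channels)
        {
            // copy the signal out for the external output device
            if (routedBuffer.Length != data.Length)
                routedBuffer = new float[data.Length];
            System.Array.Copy(data, routedBuffer, data.Length);

            // zeroing here silences Unity's default output; leaving the data
            // intact plays audio on both outputs simultaneously, as described
            if (muteAfterRouting)
                System.Array.Clear(data, 0, data.Length);
        }
    }
    ```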

    I have no idea how exactly Timeline works with audio sources internally - I'd have to test it properly. But I'm in the process of fixing more things, also not related to AudioStream, so it'll take a few days at least

    I'd say your setup is probably OK, but AudioSourceOutputDevice would probably need some changes to work with it properly I think.

    btw there's also the editor log, which you might want to look into: set log level to DEBUG on the output component before running; the editor log can be opened from the Console menu in the upper right corner - this way you can obtain its location
     
  5. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Just submitted an update with a few fixes and demo scene updates for demo builds:
    - reworked demo scenes + added main launch screen for demo builds
    - improved IL2CPP compatibility for a remaining scene, which was not compatible
    - several fixes when disabling/stopping playback and input on game objects
    - improved LAN exchange by moving network loop to FixedUpdate
    - added public demo builds for Windows (x86 and x64), macOS and Android
     
  6. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
  7. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @rekka3000
    it's possible to setup a timeline clip together with AudioSourceOutputDevice:
    - I made a copy of the OutputDeviceDemo scene, deleted everything except Main Camera and the AudioSource+OutputDevice game object (assign whatever clip you want to the AudioSource - note: this will be the same clip as referenced on the timeline - *it should probably be the same, although I'm not sure - and not tested if not*)
    - create new PlayableDirector game object, add PlayableDirector on it, create your timeline asset
    - add a new audio track as normal, make sure the AudioSource+OutputDevice game object is assigned, and add the clip mentioned above as the audio clip
    - make sure everything is ok on PlayableDirector; that's all for timeline
    - make sure AudioSourceOutputDevice looks like this:
    upload_2018-3-25_21-25-15.png

    ( 2 is my headphones output )
    - you'll need to modify AudioSourceOutputDevice.cs source:
    comment out the automatic start sequence at Ln 134, but make sure FMOD still picks up the output buffer:
    Code (CSharp):
    //if (this.autoStart)
    //{
    //    var _as = this.GetComponent<AudioSource>();
    //    if (_as != null)
    //        _as.Play();

    //    this.StartFMODSound();
    //}

    this.StartFMODSound();
    That should be all - it works because Timeline uses the game object referenced on it for playback;
    I should probably decouple this properly or something so the usage will be easier in the future.
    (the upcoming 1.7.4, which is pending approval, doesn't have it yet)
    Tested in 2017.3.1p4

    For the time being, the main caveat is that you can only play offline audio clips with this, of course.

    Let me know if this makes sense
     
    Last edited: Mar 25, 2018
  8. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    this 1.7.4 update with the above changes is live
     
  9. rekka3000

    rekka3000

    Joined:
    Feb 9, 2014
    Posts:
    46

    Thank you! That process works perfectly! :D If you could make a demo scene to show how people should do that, I think that's a worthwhile feature to list (as it wasn't immediately obvious). Thanks again! Perfect! :D
     
    r618 likes this.
  10. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    btw that'd be a little problematic since I distribute it with 5.5.4 right now, which never even dreamed of Timeline )
    but I will make sure the component is more suited for this, and at least mention the whole process and compatibility in the documentation
    thanks for digging into this and bringing it to my attention @rekka3000 !
     
    rekka3000 likes this.
  11. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Rather embarrassingly, I left a reference to non-included source in the latest update; the concrete error is "The type or namespace name `GVRSourceInput' does not exist in the namespace `AudioStream" (╯°□°)╯彡┻━┻

    Feel free to delete 'AudioStreamInputEditor.cs' in AudioStream/Editor, it's harmless otherwise.

    :FeelsBad: & sorry
     
  12. JonDadley

    JonDadley

    Joined:
    Sep 2, 2013
    Posts:
    139
    Hey! I've been using Audiostream for a while on multiple projects and love it - such a good plugin!

    One small issue - if I switch my project to the .NET 4.x as the Scripting Runtime Version, I get the following error spammed to the console from Audiostream:

    Station2 [ERROR][2018-03-30T14:52:43] [PCMReaderCallback] ERR_INVALID_PARAM - An invalid parameter was passed to this function.
    ==============================================
    UnityEngine.Debug:LogError(Object)
    AudioStream.AudioStreamSupport:LOG(LogLevel, LogLevel, String, EventWithStringStringParameter, String, Object[]) (at Assets/AudioStream/Scripts/AudioStreamSupport/AudioStreamSupport.cs:78)
    AudioStream.AudioStreamSupport:ERRCHECK(RESULT, LogLevel, String, EventWithStringStringParameter, String, Boolean) (at Assets/AudioStream/Scripts/AudioStreamSupport/AudioStreamSupport.cs:56)
    AudioStream.AudioStream:pCMReaderCallback(Single[]) (at Assets/AudioStream/Scripts/AudioStream/AudioStream.cs:109)
    UnityEngine.AudioClip:InvokePCMReaderCallback_Internal(Single[])​

    This error doesn't appear when using the old .NET 2.5 runtime. Now that Unity has made the 4.x runtime stable and it's no longer experimental, is there a chance you could update Audiostream to be compatible with it? It's hopefully just a small fix.
     
    r618 likes this.
  13. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Thanks for letting me know @JonDadley - I'll look into it.
    I've been rather conservative with Unity versions, and the beta is currently somewhat wobbly for me, but this certainly makes sense - thanks !
     
    JonDadley likes this.
  14. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18
    This was helpful, thank you very much. Did you ever get a chance to test out the live streaming from a DAW? I think your plugin will be perfect for what I'm doing if this works. Thank you!
     
  15. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    If you consider system output to be a DAW output, then yes (read: the source is not that important with this setup) - this is a funny video I made some time ago:

    You would have to tie everything together with Resonance; the initial latency is almost realtime, as you can see in the video - but it remains to be seen how the real, complete scenario is perceived by a human
    I will likely test this later too, after the next update is ready
     
  16. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Just submitted the 1.7.5 update, mainly with better AudioStreamInput* scenes; AudioStream should be much better on shaky networks and overall, and the .NET 4.x runtime has been checked

    - AudioStreamInput*: added multichannel input information to demo scenes
    - AudioStreamInput*: exposed advanced parameters for DSP buffers in the demo scenes
    - AudioStreamInput*: fix when stopping/changing the scene
    - AudioStream: updated network read by moving it to its own thread; note: this improves error condition handling, recovery and end of playback detection for files significantly; also buffer fill percentage is now updated properly
    - AudioStream: fix for a few samples possibly being omitted at the start when opening a file
    - AudioSourceOutputDevice: removed direct AudioSource dependency (this enables its usage on Timeline (just in runtime for now))
    - AudioSourceOutputDevice: updated demo scene, now includes an example usage for AudioStreamMinimal
    - project is compatible with .NET 4.x runtime, demo applications are built with it too
    - compatible with .NET 4.x and .NET standard APIs
    - updated README with respective changes and updated all FMOD links to be current
     
    JonDadley likes this.
  17. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    live together with 1.7.5.1 hotfix
    (1.7.5 was missing a file - not sure what exactly went wrong, but it seems to have made it this time)
     
  18. ddsinteractive

    ddsinteractive

    Joined:
    May 1, 2013
    Posts:
    28
    Does this work with a video's audio source to be able to select one of two available audio outputs?
    Headphones
    Default Built-in Speakers

    Thank you!
     
  19. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    It does, but not without some work currently.
    A necessary precondition is to set Audio Output Mode to Audio Source on the VideoPlayer, and you'd have to create an AudioClip, since the VideoPlayer does not create one automatically (this is necessary because AudioStream uses an audio callback which is not called if the clip is not present).
    Then it's possible to use the redirect as with any other AudioSource -
    rough code to add somewhere on the GO with AudioSourceOutputDevice, some time before starting the playback, would be:
    Code (CSharp):
    var asource = this.GetComponent<AudioSource>();
    asource.clip = AudioClip.Create("", 128, 2, 48000, true, null);
    asource.loop = true;
    asource.Play();
    (I recommend reading the proper channel count and output rate values from the system, though)
    I'm not sure if/how to handle this automatically right now - I'd have to think about it
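    As a sketch of that recommendation, the channel count and sample rate could be read from Unity's current audio configuration instead of being hardcoded (the component and clip name below are hypothetical):

    ```csharp
    using UnityEngine;

    // Sketch: derive the dummy clip's channel count and sample rate from the
    // current Unity audio configuration instead of hardcoding 2 / 48000.
    [RequireComponent(typeof(AudioSource))]
    public class VideoRedirectSetup : MonoBehaviour
    {
        void Start()
        {
            var config = AudioSettings.GetConfiguration();
            // simplification covering the common mono/stereo cases only
            int channels = config.speakerMode == AudioSpeakerMode.Mono ? 1 : 2;
            int sampleRate = config.sampleRate;

            var asource = this.GetComponent<AudioSource>();
            asource.clip = AudioClip.Create("videoRedirectClip", 128, channels, sampleRate, true, null);
            asource.loop = true;
            asource.Play();
        }
    }
    ```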
     
    Last edited: Apr 5, 2018
  20. WereVarg

    WereVarg

    Joined:
    Apr 9, 2012
    Posts:
    20
    I cannot find this info in the answers or the description. If we are streaming a single audio file, can we use something like scrub or rewind just to go to a new point inside the audio file? Can we get its length without downloading?
     
  21. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    It treats files and 'normal' streams the same way, so it's unfortunately not possible; it's mentioned in the description -
    "Note: being streaming oriented it does not provide typical offline audio processing such as advanced clip offline signal manipulation such as seeking back and forward, or selective time precise playback."
    but I guess it's somewhat hidden
    [technically, it fills its own very small looping buffer in real time from the network, whose content is thrown away immediately after being played back by Unity]

    - for this to work, I would probably have to download the whole file first, going through the FMOD decoder as fast as possible, i.e. at network download speed, not in realtime, save it, and re/construct the AudioClip from its data
    I'll make a note in the backlog, but no promises currently
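    The idea sketched above - decode the whole file offline, then build a seekable AudioClip from the raw samples - might look roughly like this (hypothetical; it assumes the PCM data has already been decoded into a float array by some decoder):

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: once the whole file has been decoded to raw PCM
    // (by FMOD or any other decoder), a normal non-streaming AudioClip can be
    // constructed from it, and that clip supports seeking/scrubbing as usual.
    public static class OfflineClipBuilder
    {
        public static AudioClip FromDecodedPcm(float[] interleavedSamples, int channels, int sampleRate)
        {
            int frames = interleavedSamples.Length / channels;
            // stream = false -> offline clip, fully held in memory
            var clip = AudioClip.Create("downloadedClip", frames, channels, sampleRate, false);
            clip.SetData(interleavedSamples, 0);
            return clip;
        }
    }

    // usage (assuming 'pcm' holds decoded stereo 44.1 kHz data):
    //   audioSource.clip = OfflineClipBuilder.FromDecodedPcm(pcm, 2, 44100);
    //   audioSource.time = 30f; // seeking now works as with any offline clip
    ```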
     
    WereVarg likes this.
  22. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18

    Hey man, bought the plugin and everything seems to be working great! However, I cannot seem to get AudioStreamInput spatialized with Resonance. I don't quite understand which components are necessary and which settings to use.

    I've tried turning Resonance on in the AudioStreamInput demo and I lose audio. I've tried adding AudioStreamInput components to the GVRDemo and can't get that working either.

    Would you mind setting up a scene that is AudioStreamInput 3D but with Resonance enabled? I just want direct audio input streaming to be positional. Thanks so much!


    EDIT:

    I've found a temp solution by moving the slider on the audio source from '2D' to '3D', although I don't know if this is utilizing Resonance or not.

    I am trying to drop an OVR_Camera (Oculus camera player) into the scene, and as soon as I do, the audio is no longer audible. The cube still reacts to the audio input, however, so there's signal going in. Any idea why this wouldn't work?
     
    Last edited: Apr 11, 2018
  23. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @jessekirbs this is setup I used with one of the Resonance package provided demo scenes:
    - import Resonance - don't forget to go to Project Settings -> Audio and pick Resonance Audio for the Spatializer Plugin (you can set the Ambisonic Decoder too, but it's not needed for this, I think)
    - import FMOD Unity Integrations
    - import AudioStream and set it up (I usually clean everything up as mentioned in readme at the beginning)
    - open ResonanceAudio/Demos/Scenes/ResonanceAudioDemo scene and select Cube object
    - I just added either an AudioStreamInput or AudioStreamInput2D component - both work
    (AudioStreamInput2D has better latency)
    - I pointed it to some recording device id (e.g. 1, which is my loopback-enabled normal desktop speakers)
    - when the scene is run I hear the audio being played on the PC speakers being spatialized on the cube (with some delay)
    - note you will get feedback if the input is loopback and you get close to the cube since it's being played on the same output
    ^ depending on what you want to do, you can either silence the output (by adding an AudioSource mute script on it) and only react to the input,
    or if you want to play it as well, I will have to think about that more precisely later.

    ( in general both AudioSource and ResonanceAudioSource need to be attached to the game object and resonance spatializer has to be picked in audio settings )


    That won't help much (well, actually, I'm not sure, but I'd bet that Resonance won't respect this setting - it is for the original Unity spatializer, which is only horizontal btw - I'd leave it at 2D to be sure)


    Be sure to set up all components as in e.g. the ResonanceAudioDemo scene - you need ResonanceAudioSource as mentioned above, but also ResonanceAudioListener - usually on the main camera/player
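    An 'AudioSource mute script' of the kind mentioned above could be as simple as the following sketch (a hypothetical helper, not an AudioStream component; placed last on the game object, it zeroes the signal at the end of the filter chain so earlier consumers still see it):

    ```csharp
    using UnityEngine;

    // Hypothetical helper for the feedback scenario above: the AudioSource keeps
    // processing audio (so input/redirection components earlier in the filter
    // chain still receive the signal), but Unity's default output is silenced.
    [RequireComponent(typeof(AudioSource))]
    public class AudioSourceMute : MonoBehaviour
    {
        void OnAudioFilterRead(float[] data, int channels)
        {
            // zero the buffer; filters run in component order, so put this last
            System.Array.Clear(data, 0, data.Length);
        }
    }
    ```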
     
  24. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    One more thing: added AudioStreamInput*s will overwrite whatever is being played on the game object's AudioSource -
    if you need to also play something else simultaneously, I would currently recommend setting up a separate game object without input components
     
  25. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18
    This all worked perfectly! Sounds great.

    Unfortunately, when I add an OVR_Camera or OVR_Player, even with ResonanceListeners on cameras and everything working beforehand, the audio is gone when using the Rift. Oddly enough, when I stop the simulation there's a brief audio pop with the correct input.

    Would it be possible for you to try pulling an OVR_Camera into the ResonanceDemo to see what may be wrong? Thanks so much for your quick response and information.
     
  26. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    What are these OVR_Camera and OVR_Player, and what is the address of his blog? ;0)
    just kidding: aren't those Oculus spatializer components?
    If so, Oculus uses its own spatializer plugin in Unity too, which is different from Resonance - you'd have to download and set up Oculus in a similar way as Resonance above, but all the steps won't be the same, of course.
    Let me know if this is the Oculus thing and whether you are able to modify their demo (I'm sure there's one)

    EDIT: I see you mentioned Oculus in a previous post - sorry, I overlooked it!
    It has nothing to do with Resonance, obviously :{ o_O
     
  27. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @jessekirbs If you have trouble setting the Oculus scene up, let me know and I'll have a look after some delay later today
    and sorry again for the confusion! :)
     
  28. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18
    Haha, they're actually prefabs for the Oculus player. So OVR_Camera is just a camera that is controlled by the Rift head tracking and OVR_Player is a controllable Rift player that can move around. They are both found in the Oculus Unity Utilities here.

    No problem at all! It actually was ALSO an issue with my Resonance setup, but you fixed that. Thanks again for all your help.

    I am still having trouble, so that'd be great if you could give it a go. Thanks!
     
  29. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @jessekirbs

    so far, apart from Oculus Utilities for Unity from Core Packages, I have also downloaded
    Oculus Spatializer Unity from the Audio packages at
    https://developer.oculus.com/downloads/package/oculus-spatializer-unity/
    (interestingly enough, that seemed to include bits from FMOD Studio too; they seem to be the latest version (as of now), so it's safe to skip them on import if you already have FMOD Unity Integrations imported (they will be grayed out as they won't be updated))

    - set up the project as before in the above (substitute Resonance with these two packages from Oculus); I have tested it in the same project, as there were no conflicts - don't forget to pick the correct spatializer in audio settings
    - Oculus package asks to be updated once imported - allow it to
    - open OSPNative/scenes/RedBallGreenBall - this one contains a sample for ONSP Audio Source
    - I duplicated SpatializedSound2
    - added AudioStreamInput2D on SpatializedSound3, similarly as before for the Resonance sample
    - picked the correct recording device, and spatialization seems to be working ok

    I understand you will have to customize your setup heavily - I'd probably use this scene as a starting point for spatialized input
    ~ I think this should do it for now
     
  30. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18
    Thanks for the reply. The issue is that I actually want to use Resonance and not the Oculus Spatializer. It's best if you pull the prefab 'OVR_Camera' into the scene to see my issue. The player from 'RedBallGreenBall' is just a normal First Person Controller, not VR, and the audio works fine on that. But when I try to use OVR_Camera I lose all audio.
     
  31. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Right - you have to remove all Oculus audio components then, and make sure that you have the Resonance listener on the camera.
    I'll have a look at the prefabs
     
  32. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18
    Seems like something weirder is going on. If I have a working version of AudioStream and Resonance without Oculus Utilities imported, everything works great. Then I import Oculus Utilities; sound still works, but if I pull the OVR_Camera prefab into the scene, the sound breaks. Even if I delete the OVR_Camera prefab and replace it with the camera that was working before, it still won't work. It seems like the OVR_Camera somehow breaks some settings when dropped into the scene, but I have no idea which it could be.
     
  33. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    What I did:
    - duplicated ResonanceAudioDemo scene
    - deleted Player prefab ( original resonance )
    - added OVRPlayerController prefab from OVR/Prefabs
    - added ResonanceAudioListener on CenterEyeAnchor (where the main camera is)
    - set CenterEyeAnchor as Main Camera at ResonanceAudioDemoManager for scene to work
    ( and added AudioStreamInput2D on the Cube with proper input as before )

    Can you confirm that this setup does not work for you ?
    - there do not seem to be any other audio components on OVRPlayerController prefab so audio should work as normal with resonance setup

    edit: just to clarify: I don't have OVR_Camera, or OVR_Player in Prefabs, there are OVRPlayerController and OVRCameraRig - don't know if that makes any difference
     
    Last edited: Apr 12, 2018
  34. jessekirbs

    jessekirbs

    Joined:
    Apr 4, 2014
    Posts:
    18
    This seems to have done the trick! I think I had forgotten to add CenterEyeAnchor as the Main Camera in the ResonanceAudioDemoManager. Thanks so much for your help!
     
    r618 likes this.
  35. oldbushie

    oldbushie

    Joined:
    Mar 30, 2012
    Posts:
    24
    Does this work on WebGL? We are trying Unity's WebGLStreamingAudio asset but it seems to have a fair amount of latency; would your plugin have better latency?
     
  36. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    hi @oldbushie
    FMOD has supported Unity WebGL builds since 1.10.04, but unfortunately not all functionality is there yet - so any streaming on WebGL currently does not work; in fact, no demo scene currently works on WebGL
     
  37. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    After some collaboration with the fine folks who needed it, I've added a new multichannel output separation demo scene (MONO clips as source only for now, though this can be adjusted via a custom mix matrix as needed) and just submitted 1.7.6:

    - AudioStream component compatible with UWP/Hololens (conditional UNITY_WSA defines for thread/task)
    - AudioStream network thread timeout is now adjusted continuously instead of being hardcoded
    - cleaned up demo scenes UI and explanation texts
    - new MultichannelOutputDemo scene showing how to use multichannel separation of an output with MONO audio clip and AudioSourceOutputDevice component

    Demo builds should be already up to date
     
  38. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    review times have improved - the update is live

    forgot to mention (also in the changelog): better input/recording compatibility on some Android/Daydream devices
     
  39. Towerss

    Towerss

    Joined:
    Feb 17, 2018
    Posts:
    8
    Hi Guys,

    I am interested in buying this plugin for a project, but I'm not sure if it suits the job.

    I have a multiuser application that runs with Photon Unity Networking (PUN). With PUN, audio is streamed to the PUN server and then sent to all the players in the game. PUN has a wrapper class on top of the core Unity audio classes to do its streaming. I need an application that would allow me to:
    1. record the incoming audio streaming from the server to a wav file,
    or
    2. redirect the PUN streaming to another service feed.

    Cheers :)
     
  40. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @Towerss
    first of all - AudioStream currently has no idea about Photon or PUN; the (limited) networking it does do is restricted to UNET and LAN only currently.

    - It contains one utility script for saving the audio of a GameObject, but it's probably not worth buying only for that, since you can probably find or code this yourself after a little research.

    If by 'service feed' you mean a different audio device (having nothing to do with networking), then AudioStream can help you - but if it's some other, e.g. networking, service, I'm not sure how to approach this since, as mentioned above, I've never worked with Photon so far / you might be able to pass the audio buffer along based on the current implementation, but there's no support for it; you'd have to code it yourself /
     
  41. Mihaylevskiy

    Mihaylevskiy

    Joined:
    Aug 30, 2017
    Posts:
    6
    Hi. I tried the demo AudioStreamInput2DDemo (Windows). When I start and stop recording several times, the audio stream from the microphone becomes delayed (1 - 2 seconds). How can I fix it?
     
  42. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Currently only by changing/leaving the scene and then returning to it
    (the buffers used for the incoming signal are reused by FMOD rather often, and there were crashes while freeing them when stopping, since their lifetime was not entirely clear - so they remain active to prevent that.
    Let me know if this limits you - but this should probably be fixed anyway - I'll see to it)
     
    Last edited: Apr 30, 2018
  43. prawn-star

    prawn-star

    Joined:
    Nov 21, 2012
    Posts:
    77
    Hello

    Is it possible to host the README for iOS/Android? For iOS, I would like to know if the iOS Control Center will work with your plugin, so I can play/pause/stop audio while the app is minimised instead of just controlling the volume
     
  44. Mihaylevskiy

    Mihaylevskiy

    Joined:
    Aug 30, 2017
    Posts:
    6
    In my project, the microphone is started and stopped frequently, and the device is changed as well. Please let me know when this problem is solved - I would like it solved quickly. Thanks!
     
  45. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    There's nothing special about it on the Unity side - just enable Custom as Behavior in Background with the 'Audio, AirPlay, PiP' option in the Editor, and probably add
    Code (CSharp):
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
    [[AVAudioSession sharedInstance] setActive: YES error: nil];
    somewhere before entering the background in the app controller (depending on the version of Unity - lately it *seemed* to be working without it, though) to keep audio playing in the background -
    for responding to command center events you'd need to correctly implement the MPRemoteCommandCenter delegate, though - which is not covered in the README -
    and possibly notify the Unity player, too (<- if that will work).

    Note: I don't recommend using Unity for this type of application, since the setup is - to say the least - somewhat clunky and probably dependent on the concrete Unity version, and in order to receive the remote events the Unity application would have to be running the player loop constantly in the background, which is not friendly to the battery *at all*

    You don't need any plugin to test all of this out, btw - just a 'stock' Unity iOS application.
     
  46. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    faster is our command!

    I've just updated the demo builds - find them via the links on the asset store page and test things out.
    I will submit the update to the store momentarily, once I find out a few things from the API history

    Cheers!
     
  47. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    update is live on both (old and new) stores:

    v 1.7.6.1 052018
    - tested w FMOD 1.10.05
    - fix for repeated re/starting of the recording buffer
    - added and cleaned up some more minor stuff such as useful build info to demo scenes
     
  48. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    One more issue was discovered in connection with the above.
    Demo builds are updated already, so you can give it a go if you want, @Mihaylevskiy - hopefully everything's finally in order, but please feel free to let me know should anything not be right

    Given the very quick review turnarounds lately, the package should be live on the store within 24-48h
     
  49. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    1.7.6.2 is live on both stores
     
  50. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    In the next update I will address:
    - drifting in AudioSourceInput2D, which can occur over longer runs and becomes apparent mainly if input and output sample rates differ significantly (resulting in increasing delay / latency over time), due to how resampling currently works by just changing the pitch (not optimal).
    I will leave the option to use the existing method as it is now - i.e. using Unity to do the resampling - or to use a new method which can resample continuously over the input buffer *)
    - a minor compatibility fix related to networking info in the Unity 2018.2 beta

    ------
    *) there are multiple ways of dealing with different numbers of input and output channels -
    the method I'm using will be documented in the source, but any custom mapping between input and output channels will have to be done by the user - I will probably allow setting this up via some 'SetMixMatrix' method (a common scenario such as {1,2} : {1,2} is doable automatically, though)
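    A mix matrix of the kind mentioned above maps each input channel to each output channel by a gain factor. A minimal illustration (the class and method here are hypothetical sketches, not the finalized 'SetMixMatrix' API):

    ```csharp
    // Hypothetical illustration of a mix matrix: rows = output channels,
    // columns = input channels; each entry is the gain of that input channel
    // in that output channel.
    public static class MixMatrixSketch
    {
        // Mix 'input' (interleaved, inChannels) into a new interleaved buffer
        // with matrix.GetLength(0) output channels, using matrix[out, in] gains.
        public static float[] Apply(float[] input, int inChannels, float[,] matrix)
        {
            int outChannels = matrix.GetLength(0);
            int frames = input.Length / inChannels;
            var output = new float[frames * outChannels];

            for (int f = 0; f < frames; f++)
                for (int o = 0; o < outChannels; o++)
                {
                    float sum = 0f;
                    for (int i = 0; i < inChannels; i++)
                        sum += matrix[o, i] * input[f * inChannels + i];
                    output[f * outChannels + o] = sum;
                }
            return output;
        }
    }

    // e.g. a MONO clip routed only to channels 3 and 4 of a 4-channel output:
    //   float[,] m = { { 0f }, { 0f }, { 1f }, { 1f } };
    //   var quad = MixMatrixSketch.Apply(monoSamples, 1, m);
    ```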