
AudioStream - {local|remote media} → AudioSource|AudioMixer → {outputs}

Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.

  1. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @CptDustmite ,
    - I recommend flagging the game objects / components which need an OutputDevice component added, and operating only on those. You can flag them by any means appropriate (create a Tag for them, put them on a separate Layer, just name them appropriately, or, if they all share a common component, expose a boolean flag on it if possible)
    ^ I hope I got your problem right

    The reason for this is that FMOD has a hard limit on the number of systems which can be created, and due to how it is used it's easier to create a system each time one is needed. Otherwise (possibly much) more elaborate and complicated management would be needed, especially for the OutputDevice component, so I decided not to complicate this and to offload some of it onto the user for now (but it's probably time to think about this)

    I think it's 10 or so instances currently (so 10 different AudioStreamOutputDevice components can be running at the same time) - please let me know if you can live with this limitation

    - another option would be having only a few ASOD game objects/components and playing all necessary audio on them (which might not be suitable for e.g. 3D setups)
    I hope I got the problem right - let me know if you were able to work around this for now
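The tagging approach above can be sketched roughly like this (a minimal sketch only - it assumes a custom tag named "NeedsOutputDevice" has been created in the project's Tag Manager; AudioSourceOutputDevice is the asset's component):

```csharp
using UnityEngine;
// using AudioStream; // add the asset's namespace here if the component lives in one

// Sketch: add the OutputDevice component only to objects flagged with a tag,
// keeping the total number of created FMOD systems under the limit.
public class OutputDeviceFlagger : MonoBehaviour
{
    void Start()
    {
        foreach (var go in GameObject.FindGameObjectsWithTag("NeedsOutputDevice"))
        {
            // add the component only once per flagged object
            if (go.GetComponent<AudioSourceOutputDevice>() == null)
                go.AddComponent<AudioSourceOutputDevice>();
        }
    }
}
```

Attaching this to any bootstrap object in the scene keeps unflagged objects untouched.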
     
  2. SimeonOA

    SimeonOA

    Joined:
    Jul 17, 2018
    Posts:
    4
    Hi @r618,

    I just noticed AudioStream and am wondering if it can serve our purpose.

    We are using the Samsung Galaxy S9+ and are trying to send different audio signals to two different audio devices simultaneously.

    S9+ allows us to connect to two different devices via bluetooth.

    Is it possible to send two different audio signals to the two devices, preferably via Bluetooth? If we cannot send to both devices via Bluetooth, is it possible to send the signals from Unity to one of the devices via a wired connection and to the other via Bluetooth?

    How best can any of these be achieved?

    Thanks

    Simeon
     
  3. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @SimeonOA ,
    it all boils down to whether Unity/FMOD can see the devices or not
    I recommend downloading the demo apk and opening the 'OutputDevice' demo scene on your phone - it has a list of all currently connected devices visible in the Unity application
    If you can see the needed outputs you can play on them, e.g. via AudioSource as usual
    If not, things might get more complicated and you might want to look for native Android plugins for this, but I'm afraid I wouldn't be of much help with that
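For reference, listing the output devices FMOD can see looks roughly like this (a sketch only - the exact C# wrapper signatures differ between FMOD versions, so treat the calls as an approximation of the 1.10-era API):

```csharp
using System;
using System.Text;

// Sketch: enumerate the audio outputs visible to FMOD, similar to what
// the 'OutputDevice' demo scene displays.
public static class OutputDeviceLister
{
    public static void ListOutputs()
    {
        FMOD.System system;
        FMOD.Factory.System_Create(out system);
        system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

        int numDrivers;
        system.getNumDrivers(out numDrivers);
        for (int i = 0; i < numDrivers; ++i)
        {
            var name = new StringBuilder(256);
            Guid guid;
            int rate, channels;
            FMOD.SPEAKERMODE mode;
            system.getDriverInfo(i, name, name.Capacity, out guid, out rate, out mode, out channels);
            UnityEngine.Debug.LogFormat("Output {0}: {1} ({2} Hz, {3} ch)", i, name, rate, channels);
        }

        system.release();
    }
}
```

If a Bluetooth device does not appear in this list, the asset cannot address it.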
     
  4. SimeonOA

    SimeonOA

    Joined:
    Jul 17, 2018
    Posts:
    4
    Hi @r618 ,

    I tried the OutputDeviceDemo scene. It does not show any of the devices as an output.

    However, when I played each of the audio options (AudioStream.AudioStream, AudioStream.AudioStreamMinimal, and UnityEngine.AudioSource), I still had the audio come through on both devices.

    Does this mean we can still use AudioStream?
     
    Last edited: Jul 18, 2018
  5. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi, I'm afraid not, unfortunately
    Since you need different signals on different connected BT devices and the app can't see them, there's no way to direct audio to either of them
    - Android seems to play whatever is on the main output on everything currently connected - this is similar on iOS, btw, where there's some level of support to distinguish between the main and a BT device, but I never got it working -

    It would probably require a rather deep dive into the Android audio system to see how/if it's possible - I recommend looking at Android native audio first to see if it's possible, and only then trying to integrate it into Unity if so
    In any case - AudioStream does not have support for separating output to Bluetooth connected devices currently (ideally they would show up in the demo scene as separate options).

    Hope it helped regardless!
     
  6. SimeonOA

    SimeonOA

    Joined:
    Jul 17, 2018
    Posts:
    4
    Hi @r618,

    Thank you so much.

    We are currently experimenting with different frequency ranges to get different outputs.

    Thanks for taking the time to answer my questions
     
    r618 likes this.
  7. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Just letting anybody interested know -
    I think I've solved output device in AudioMixer (Windows only for now*) -
    ASODMixer.PNG

    - left mixer group: RoutingReceive1 plays on system output 2, RoutingReceive2 plays on system output 1, both connected to 'RoutingSend' via stock Unity Send/Receive effects; the main output is muted manually - the signal is passed along in the mixer - so it's possible to distribute one source to as many system outputs as needed simultaneously

    depending also on how testing by @Nirvan goes, I should package along the other needed bits in the next couple of days

    *) I'll probably consider support for macOS in the future, but I'm not sure about mobile platforms since I'm not sure the native SDK covers them in the first place; Android would be problematic for me regardless, though
     
    Nirvan likes this.
  8. Shovancj

    Shovancj

    Joined:
    Dec 21, 2011
    Posts:
    16
    I ran into a problem with adding more than 8 AudioSourceOutputDevices in the scene.

    Exception: FMOD.Factory.System_Create ERR_MEMORY - Not enough memory resources.

    as soon as the 9th is added this error pops up.

    I only need one AudioSourceOutputDevice per device I have plugged into my machine.

    Is there a way I can have a single game object with one AudioSourceOutputDevice per OutputDriverID that I can somehow direct the AudioSource to when I want it on that OutputDriverID?
     
  9. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi,
    in the next update (I'll submit it next week)

    if you are on _and_ targeting Windows you can use the Windows native mixer plugin for this, which will be part of it
    (no other platforms are currently supported - if this suits you, please PM me your invoice # for the plugin download)

    To clarify: you will have the option to choose between the native plugin on Windows and the current component-based approach, which will be changed to create one FMOD system per output, not per game object, on all platforms.
     
  10. Shovancj

    Shovancj

    Joined:
    Dec 21, 2011
    Posts:
    16
    HEY! thanks for the fast reply

    To clarify: I should be able to create a prefab with an AudioSource and an AudioSourceOutputDevice with an OutputDeviceID of 1.

    I should then be able to instantiate that prefab 20 times and not run into this error, because they all have an OutputDeviceID of 1 - i.e. only one FMOD System is created.

    If I were to change the device ID of any of them to 2, then another FMOD System would be added.

    Thanks again!
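The pooling described here could be sketched like so (purely illustrative - FmodSystemPool and GetFor are made-up names, not the asset's actual API):

```csharp
using System.Collections.Generic;

// Sketch: one FMOD system per OutputDeviceID, shared by any number of
// prefab instances; a second system is created only for a new device ID.
public static class FmodSystemPool
{
    static readonly Dictionary<int, FMOD.System> systems = new Dictionary<int, FMOD.System>();

    public static FMOD.System GetFor(int outputDeviceId)
    {
        FMOD.System system;
        if (!systems.TryGetValue(outputDeviceId, out system))
        {
            FMOD.Factory.System_Create(out system);
            system.setDriver(outputDeviceId);
            system.init(32, FMOD.INITFLAGS.NORMAL, System.IntPtr.Zero);
            systems[outputDeviceId] = system;
        }
        return system; // 20 prefabs with ID 1 share a single system
    }
}
```

Individual components then only create sounds on the pooled system, which is what removes the per-component system limit.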
     
  11. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Yes
    There should be no limit after that, as far as I know (you'll just be creating sounds on an existing FMOD system)
    test it once it's out and let me know
     
  12. Shovancj

    Shovancj

    Joined:
    Dec 21, 2011
    Posts:
    16
    God I hope this works lol..

    Will PM you the Invoice# in a moment.
     
  13. unity_4tqHaOJ1aemOjQ

    unity_4tqHaOJ1aemOjQ

    Joined:
    Jul 30, 2018
    Posts:
    2
    Hello,

    I am really new to working with audio and I have a question.
    I need to implement a voice chat app and I am considering using this library, as it seems to cover pretty much everything I need.
    I already have an in-house relay server set up (the server supports TCP and UDP) and I was thinking of sending the audio data (a byte array, I assume) to this server and relaying it to other devices.
    Does this library expose the captured audio data as a byte array and allow me to play it back as I stream from the server?

    Thank you in advance
     
    Last edited: Jul 30, 2018
  14. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi
    it has two ways of pushing audio over the network:
    1] a custom OPUS encoded stream between two UNET endpoints
    2] PCM (I think PCM16), or an OGGVORBIS encoded container, which is pushed to a chosen and configured Icecast instance via a .NET TCP stream
    So while it does not expose the captured audio directly, in the case of the latter you can write your own authentication or custom handshake, e.g. by adapting the existing IcecastWriter / IcecastSource and substituting your own components

    For playback you need the stream to be compatible with FMOD. If you download and run the demo it should play the server stream url you enter
     
    Last edited: Jul 31, 2018
  15. unity_4tqHaOJ1aemOjQ

    unity_4tqHaOJ1aemOjQ

    Joined:
    Jul 30, 2018
    Posts:
    2
    Hello,

    Thank you for the fast response!
    This might sound like a dumb question, but if I were to go with option #2, would I have to have my own Icecast server?

    Cheers!
     
  16. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    The Icecast server requires certain information in the initial HTTP PUT request, to which it responds - that's what's implemented in IcecastWriter - once the connection is accepted, all audio data is progressively written to the opened socket (.NET StreamWriter)
    So no - you don't need an Icecast server, but you need to implement your own logic, since I can't know what your in-house relay server requires to establish a connection and start accepting data
    As I mentioned previously, the audio format in this case is PCM16, which you might want to alter depending on what your server accepts
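A rough C# sketch of that handshake-then-stream pattern (the header fields are an approximation of what an Icecast-style server expects - replace them with whatever your own relay server requires; PcmPushClient is a made-up name):

```csharp
using System.IO;
using System.Net.Sockets;
using System.Text;

// Sketch: send an HTTP PUT with credentials and content type, then keep
// writing raw audio bytes to the same socket.
public class PcmPushClient
{
    TcpClient client;
    Stream stream;

    public void Connect(string host, int port, string mount, string basicAuth)
    {
        client = new TcpClient(host, port);
        stream = client.GetStream();
        var header =
            "PUT " + mount + " HTTP/1.1\r\n" +
            "Host: " + host + "\r\n" +
            "Authorization: Basic " + basicAuth + "\r\n" +
            "Content-Type: audio/ogg\r\n" +
            "Expect: 100-continue\r\n\r\n";
        var bytes = Encoding.ASCII.GetBytes(header);
        stream.Write(bytes, 0, bytes.Length);
        // a real client should read and check the server's response here
        // before starting to stream
    }

    public void Push(byte[] pcm16) { stream.Write(pcm16, 0, pcm16.Length); }
}
```

For a custom relay server, only the Connect part needs to change; the progressive Push of PCM16 data stays the same.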
     
  17. CptDustmite

    CptDustmite

    Joined:
    Jun 17, 2014
    Posts:
    61
    Thanks for the reply again.

    I have minimised the number of OutputDevice components as suggested; I can get by with just 1. No more memory errors in the console.

    My issue now is that I am having strange audio issues in my project, which I have never encountered before in my 8 years of using Unity. As an added challenge, my project is a VR one switching audio between a VR headset and a touchscreen TV, so this problem could potentially be caused by that configuration plus PC hardware limitations. But I thought I'd ask whether it might have anything to do with AudioStream/FMOD being installed.

    Problems:
    1) Occasionally during the build, all audio will stop entirely and you cannot hear anything. Restarting the PC fixes this. This is a big problem because our application is due for release.
    2) (Happens occasionally) Play the build - it changes the output device to the TV correctly. Close the build. Turn off the TV. Turn on the TV. Open the build. It does not change the output device to the TV, and no audio can be heard.
    3) There was also an occasional issue where all the audio would come out very distorted/garbled, such that it was not usable, although this might have been an issue with the cable on our VR headset.

    There are no console errors for the above.

    I did find some occasional mentions of FMOD and audio stopping; not sure if they're related at all, but it did make me think perhaps FMOD was involved:
    https://www.reddit.com/r/GameAudio/comments/46tcml/fmod_sounds_stop_playing/
    https://forums.oculusvr.com/develop...udio-cutting-out-after-a-while-using-fmod-ue4
    https://www.fmod.org/questions/question/audio-cutting-out-in-unity-5-6/

    Do you happen to know if there's anything in AudioStream/FMOD that could be causing the above issues? They do not happen at the time of audio switching, they happen later in gameplay. So if it is related then it must be due to ongoing things in the background perhaps.

    Thought I'd ask here in case it's related. Is there some sort of cleanup method that I'm not calling? Thanks!
     
    Last edited: Aug 2, 2018
  18. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hello, thanks for letting me know

    It might have to do with the FMOD system being created fresh each time, as it is now, and - most importantly - I think the update call (an FMOD function) is even missing currently (those links are helpful, thanks!)
    Is restarting the PC necessary, though? Restarting just the application should be enough (if you have it started at startup, that makes sense)

    Please wait for the rewrite I'm currently working on if possible - the system will be created just once for each output, which should help with stability

    This looks like the output might not be ready at the time the application is started - it might take a while until the audio device is recognized by the system after the TV is turned back on
    I recommend checking/listing all available devices prior to selecting the output, and/or verifying manually that the device is available in the system playback devices list too - it might take a few seconds
    If the problem persists after a short wait and you're sure the output is correctly brought up (it might be worth checking with an independent application), please let me know

    That might also have happened with an incorrectly detected/set number of output channels - if you're using a VR headset, please try to match the output channels in Unity as well if you haven't already (AudioManager -> Default Speaker Mode)
    Distortions might also have been caused to a degree by having Best latency selected (the audio driver sometimes simply can't keep up) - try going with Good latency, or Default if possible
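Checking this from script is possible with Unity's AudioSettings API; a small sketch (the dspBufferSize value used for 'Good latency' here is an assumption):

```csharp
using UnityEngine;

// Sketch: compare the driver's actual speaker mode with the project's,
// and enlarge the DSP buffer to back off from 'Best latency' if audio distorts.
public class AudioConfigCheck : MonoBehaviour
{
    void Start()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();
        Debug.LogFormat("Driver: {0}, project: {1}, DSP buffer: {2}",
            AudioSettings.driverCapabilities, config.speakerMode, config.dspBufferSize);

        if (config.speakerMode != AudioSettings.driverCapabilities)
        {
            config.speakerMode = AudioSettings.driverCapabilities;
            config.dspBufferSize = 1024; // roughly 'Good latency'
            AudioSettings.Reset(config); // note: this restarts the audio engine
        }
    }
}
```

Note that AudioSettings.Reset interrupts all playing audio, so it is best done at startup, before any AudioStream components are initialized.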

    I was hoping to release the update already, but it's not ready yet;
    Anyway, I hope the above helped for now
    (I'll send you the update separately prior to submitting so you don't have to wait for store approval)
     
  19. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Just submitted 1.8 update:

    - project structure should be properly decoupled - you can delete any unneeded component/folder in Demos and Scripts - only Scripts/AudioStreamSupport is common and required for all AudioStream scenes/components. The native mixer plugins don't even need those.
    - upon opening main demo scene for the first time, all AudioStream demo scenes are added to the Editor Build Settings, if needed (it might be necessary to restart main scene for changes to be picked up)
    - improved current channels count detection for all components by taking into account AudioSettings.driverCapabilities - thanks to forum user ddf for reporting
    - more accurate (but less frequent) time elapsed for AudioStream by measuring dspTime only when audio is actually requested/played
    - GVR DSP plugins release bugfix
    - significant change how creation of FMOD systems is handled overall: there's only one FMOD system created per output device (for AudioSourceOutputDevice components), one FMOD system for all Unity AudioSources (AudioStream) and each non-Unity sound (AudioStreamMinimal) has its own (since setDriver can be called on them independently)
    - due to the above, output device properties are now configured via a scriptable object in Scripts/AudioSourceOutputDevice/Resources/OutputDevicesConfiguration instead of directly on each component
    - ^ note: DSP buffer settings were removed for output devices for now
    - new OutputDevicePrefabDemo scene demonstrating instantiation of AudioSourceOutputDevice prefabs
    - new native mixer plugins (Windows x64 and macOS only for the time being) for redirecting directly from audio mixer to other than default system outputs
    - the user has the option to delete either the 'normal' AudioStream functionality or the native plugin, or keep both in the project - each system needs different FMOD packages/libraries to be imported, so the installation guide was updated as well
    - moved automatic deinitialization from OnDisable to OnDestroy - since the components are supposed to be long running, i.e. for the lifetime of the scene, this makes more sense, and they can be properly enabled/disabled individually now as well
    - improved releasing of the FMOD system for AudioStream/Minimal under unstable conditions, in that the system is actually released now - using a delay observed from FMOD diagnostics debug logging for the file thread -
    - ^ added FMOD.Debug trace wrapper for diagnostics should it be needed
    - native plugins compiled against FMOD 1.10.08

    So I hope all the stability fixes and improvements will help people use it more easily
    I hope I didn't forget anything and everything goes smoothly with review and publishing --
     
  20. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    btw thanks to user @ddf for reporting better output channels detection on his setup, @Shovancj and @CptDustmite for testing and suggestions about FMOD system pooling for better output device handling and project structure and @Nirvan for help with testing windows native mixer plugin ! \ (•◡•) /
     
  21. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    I should point out that setup is more involved due to the addition of the new native mixer plugins - but all it needs is the deletion of the unneeded parts after import (or keeping them and adding the new FMOD bits) - it should all be described at the top of the README, though.
    In any case - the forums are open, too!
     
  22. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305

    This review was extraordinarily quick, this update is live just one day after submission.
     
  23. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    for user @JARCORP, who left a review with a support question about playing the AAC format and then went radio silent (the review area is really *not* the place for support questions):

    - you have to play the file on a physical device - write or PM me here if you need help setting it up (the file must also be a DRM-free variant on iOS)
     
  24. sticklezz

    sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    Is there any way to use this so Unity could hear/listen to a mobile (Android, iOS) device's native audio?

    For the purposes of a visualizer game, I want to allow any of the user's mobile audio (iTunes, Spotify, etc.) to work. The visualizer works with any scene audio, but not device native audio.

    Is this even possible?
     
  25. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    There are things like AudioBus for inter-app audio on iOS and they probably exist for a reason :) (and both applications must support it)
    In other words, I'm not sure you can even hack this somehow on mobiles - it's very likely intentionally not supported by vendors/drivers
    I'm not 100% sure it's not possible, but you'd have to go looking by yourself into native audio on each specific platform (AudioStream/FMOD is built on top of each respective platform's audio, so it does not provide something like this by itself)
     
  26. sticklezz

    sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    Ah thanks, that's what all the evidence is pointing to.

    The best luck I've had so far is just forcing the user to use speakers and using the microphone to listen (since the speakers are next to the microphone on mobile)
     
    r618 likes this.
  27. ramon_delmondo

    ramon_delmondo

    Joined:
    Aug 19, 2015
    Posts:
    22

    Attached Files:

  28. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @ramon_delmondo, thank you !

    This unfortunately falls into the 'not currently supported' category (it's mentioned on the asset page, too: "(+) Please note that M3U/8 support is currently limited to nonrecursive, nonchunked, direct streams only")
    - I should probably clarify there that it's HLS support that is missing

    [ Technically I can get to the actual media locations from the playlist, but the main problem with this playlist format is that it distributes the content in chunks, not as one continuous stream of data - this is not supported right now.
    Moreover, in this case the media (your link is down now, but I was able to download and test it meanwhile) is in the .ts transport format, which FMOD can't play either. ]

    I would recommend switching to an ice/shoutcast or similar content producer - if possible.
    - you might be able to circumvent this to a degree by having another client which supports HLS/m3u8 streaming emit or pass along the decoded stream, either to an ice/shoutcast producer or to the audio output directly.

    I will keep looking into general HLS support, at least for supported formats, but as this transport usually contains video content, universal support is probably very unlikely.

    I hope this helped somehow anyway - let me know if that's the case, and if it's not as well !
     
  29. intercodegames

    intercodegames

    Joined:
    Dec 1, 2014
    Posts:
    11

    Thanks r618.
    We want to use Wowza Cloud to stream audio for our app (made with Unity). It will stream for the app only, so we can change how Wowza sends the data the way we want. Can you please advise how to configure Wowza Cloud to send a stream that will work with AudioStream?
     
  30. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    FMOD 1.10.09 is out - please wait until asset is updated if you encounter any errors, or download previous 1.10.08 until then

    Thank you !
     
  31. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Just submitted 1.8.0.1:
    - updated for FMOD 1.10.09
    - AudioStreamDownload: added decoded_bytes and file_size for download 'progress' (removed playback_time)
    - AudioStreamDownload: added tags support
    - updated ambisonic demo scene/s:
    - GVRSoundfield allows custom amb file path to be entered and played
    - GVRSoundfield, GVRSource - fixed few bugs and automatic playback finishing when end of file is reached
    - updated README [AAC on iOS, also some contact information, and few corrections]

    Hopefully this mainly helps people who want to test playback of ambisonic files with more than 8 channels, which Unity currently can't properly import.
     
  32. ramon_delmondo

    ramon_delmondo

    Joined:
    Aug 19, 2015
    Posts:
    22
    Hi @r618

    The stream was working perfectly, I didn't change anything but now I got this error in Xcode:

    AudioStream.AudioStreamSupport:LOG(LogLevel, LogLevel, String, EventWithStringStringParameter, String, Object[])
    AudioStream.AudioStreamSupport:ERRCHECK(RESULT, LogLevel, String, EventWithStringStringParameter, String, Boolean)
    AudioStream.AudioStream:NetworkLoop()

    (Filename: /Users/builduser/buildslave/unity/build/artifacts/generated/common/runtime/DebugBindings.gen.cpp Line: 51)

    AudioStream [ERROR][2018-10-23T22:01:28] Network sound.readData ERR_INTERNAL - An error occurred that wasn't supposed to. Contact support.

    The stream works on Android and in the editor.
    Can you help me please?
     
  33. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    make sure ‘Stream type’ matches the stream format
    post a link to your stream here or in a PM - I’ll test it on iOS
     
  34. ramon_delmondo

    ramon_delmondo

    Joined:
    Aug 19, 2015
    Posts:
    22
    Thank you @r618
    I changed stream type and now it's working again.
     
    r618 likes this.
  35. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Great !
    - iOS is more picky with Autodetect than standalones and Android - for reasons unknown
    glad it's working!
     
  36. mariopinto_appgeneration

    mariopinto_appgeneration

    Joined:
    Apr 17, 2018
    Posts:
    2
    Hi!

    I'm having a problem with the AudioStream demo.
    I was testing and I got an error saying "The specified resource requires authentication or is forbidden."
    How can I add information to the request's headers? Can you help me with this issue, please?

    Best regards,
    MP
     
  37. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @mariopinto_appgeneration , authentication is currently not properly exposed - if possible, please PM me the url + credentials, I'll verify it and let you know what can be done about it

    Thank you !
     
  38. intercodegames

    intercodegames

    Joined:
    Dec 1, 2014
    Posts:
    11
    Hello r618,

    We need the streaming audio to play in the background, when the app is not focused or is sleeping.
    We have followed the instructions in the README file, and it works for iOS, but not for Android.
    You've said that to do it properly on Android we should build a service. But how would that work? Would the Android service play the audio using AudioStream, or should the service use native Android functions to play the audio?
     
  39. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi, yeah, that is a problem; a service would need to do the whole streaming thing in order to keep audio playing in the background
    These things changed over time, the general idea at the time being that you'd drive the service from the foreground (Unity) application -
    - you can use the FMOD SDK for Android for the service itself, too

    btw, did you follow the instructions for Android properly and disable e.g. the onPause method? If that didn't work it's most likely Android OS specific :/

    This part is not implemented fully in AudioStream (sorry!)
     
  40. Musiken

    Musiken

    Joined:
    Sep 29, 2016
    Posts:
    2
    Hi,

    I just bought AudioStream because it works with Resonance and allows for peer-to-peer voice chat locally with UNET. What I want to do is use my microphone, connected to an audio interface, as an input for the AudioSource and be able to use the alpha/sharpness with Resonance. That would be what the other person would hear.
    I've read the readme and installed FMOD and Resonance, but could you elaborate on how to do that? (I'm a beginner, sorry)

    I'm also having an issue where I can't choose my audio interface's input. When I click on it I get "An error occurred trying to initialize the recording device" in the input demo. I can use my Vive microphone instead and hear it through my speakers connected to the audio interface (if I untick mute output), or any other output. But I really need the input from my audio interface to work. I've tried using different outputs, like the Vive speakers, with the input from the audio interface - it doesn't work either. It's a Focusrite 2i2, with only one mic connected to one input. What could be the issue here?

    Any insight would be helpful,

    Kind regards,
    Musiken
     
  41. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi @Musiken, no worries, those are legitimate questions
    1] the resonance setup: I've followed the current example in the resonance package; all that's needed is to add AudioStreamInput2D on the game object you want to be the input source and configure the AudioSource on it, something like this:

    Input2D+Resonance.PNG

    notice the AudioSource output is directed to the resonance mixer, and Spatialization is enabled
    Note: I had to restart Unity for the mixer to work afterwards, for some reason
    (it's exactly the same setup as in the sample scene with the cube, only with AudioStreamInput2D added (AudioStreamInput should work too, btw))

    2] if you need to send data over the network, the only solution which currently works is to attach your UNETSource on the game object with the ResonanceAudioListener (typically the main camera)
    - but this means that the whole scene audio will unfortunately be included
    There's currently no easy way to get to the mixer (resonance) buffer without further work
    Let me know if this works for you


    please send me the editor log from when the error occurs
    (accessible e.g. via the 'cog' icon in the upper right corner of the console)

    is the recording device present among the other recording devices in the system when you connect it (sound properties in Control Panel on Windows / System Preferences on Mac)?
    it looks like it's a USB device, right? - that might potentially be a problem - if e.g. a drivers update doesn't help, I'm not sure I can do something about it, unfortunately /
     
  42. Musiken

    Musiken

    Joined:
    Sep 29, 2016
    Posts:
    2
    Hey,

    Thank you for getting back to me so quickly, I'll get right on it. If I understand correctly, I can transfer audio (voip) but it will cut off any other sound in the scene (like a video)? That would be less useful than expected, but still useful.

    Issue with input
    I reinstalled the audio driver for my interface; now the input works for all outputs except the audio interface itself, which results in the error. I've included the editor log (editor.zip). I tested the input/output in Discord and Teamspeak - they also seem to only allow one or the other, but not both at the same time. It works in Ableton, though. So I guess it's an issue outside of AudioStream.

    Issue with latency
    If I use the HTC Vive as output and the audio interface as input, I get about 1 second of latency on the "best latency" setting. The settings tell me the input mixer latency is about 43ms with the automatic DSP buffer size, and it gets to 1ms if unchecked. I can also use the HTC Vive as both input and output and I'll get the same latency of 1 second between the time I make a sound and the time I actually hear it. I've also included a log if that helps (editor2.zip).
     

    Attached Files:

  43. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi,
    no, it's the other way around: the whole final mix will be transferred - user input + any audio being played in the scene - i.e. everything you normally hear from the scene

    one more program to check, please - Skype
    - try also how the latency of the inputs looks there with the required device/mic
    I'll send you a PM about what in the sources is worth modifying to test whether it can be opened

    see above, but 1 sec is rather a lot - as above, please verify what the roundtrip looks like in e.g. Skype (if selecting the Vive interfaces in it works) - it might boil down to the Vive drivers, I'm afraid, in which case there's probably nothing I can do
     
  44. wazapen

    wazapen

    Joined:
    Oct 30, 2018
    Posts:
    6
    Hi, I have a question: is it really not possible to play streaming audio when you leave the app or lock the phone in an Android app?
     
  45. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Hi, it seems to vary per Android OS version, unfortunately
    I'll try to build a demo apk using the current (i.e. 'onPause method') gradle build method tomorrow so you can test on your device
     
    Last edited: Oct 30, 2018
    wazapen likes this.
  46. wazapen

    wazapen

    Joined:
    Oct 30, 2018
    Posts:
    6
    Thanks for the quick answer :D I appreciate your help.

    I forgot to mention: what I intend to do is play audio from a live streaming URL, like a radio app, "just that". It would be great if there is some way to do it.
     
  47. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    @wazapen
    There are two different components/players in the AudioStreamDemo demo scene: AudioStream and AudioStreamMinimal, which behave differently when the app is in the background - AudioStreamMinimal seems like it doesn't mind (i.e. it keeps streaming when the phone is locked/the app is in the background) and you can verify this even now with the demo apk (the link is in the description on the store page)

    But I went ahead and built a test build with onPause removed in the Unity activity (this will have no impact on AudioStreamMinimal IMHO, but anyway): https://www.dropbox.com/s/770s2rgksmo7fup/AudioStreamDemo-debug.zip?dl=1

    On Android 5.1
    - AudioStreamMinimal works 'as expected'
    - AudioStream works in the second build (but since updates in the background are not guaranteed, the playback is not smooth on my older low-end Android phone)
    (And it's hard to tell how stable this might be since Android probably too can deny application resources when it's suspended)

    You can test this on your device; if neither of the players in the demo scene works, I'd have to consider implementing the service - whose ETA I'm currently not sure about -
     
  48. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Since support for the .NET 4.x runtime has become rather non-experimental (and is the default starting from 2018.3, I think), I'd like to drop support for the 3.5 runtime, which would allow me to include and use 4.x-only libraries and simplify maintenance
    That would mean dropping support for Unity 5.x completely, with submissions to the asset store done probably with the 2017 LTS for now.
    I added a poll - go ahead and vote as you see fit! I will leave it up for some time
     
  49. wazapen

    wazapen

    Joined:
    Oct 30, 2018
    Posts:
    6
    Thanks for the answer. I'm trying to get a proper URL for the streaming; with that, this would work perfectly - AudioStreamMinimal does what I need
     
    r618 likes this.
  50. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
    Ok, thanks for testing - good to know