Discussion in 'Assets and Asset Store' started by r618, Jun 19, 2016.
1.7.7 update with the above changes is live (on both stores)
just submitted 1.7.8 with a new component for non-realtime downloading and automatic AudioClip construction from network audio data:
v 1.7.8 052018
- new component AudioStreamDownload: allows downloading a file/stream faster than realtime and playing an automatically created AudioClip afterwards
- AudioStreamDownloadDemo scene
- playback of very short clips was fixed in connection with this
- less confusing logging for (end of) playback
builds with new demo scene should be already online
(have a look at it @WereVarg and let me know if there's anything still missing)
Noob question, and sorry if it was asked before, but can I use this plugin for realtime audio voice chat?
Thanks in advance!
hi, not a noob question at all !
the answer is: while it is possible, it's not something set up out of the box, and it would currently be possible only between known IP endpoints directly (this usually means it'd be restricted to local networks)
- technically you would have to know each player IP in advance and then have each start its own Source (with desired input/microphone), and Client to connect to other instance
So, nothing automatic unfortunately
It's also currently based on UNET, which has a major limitation in that it can't be run off the main thread and is thus framerate (or fixed-framerate) dependent - which is not exactly friendly to lightweight realtime streaming
I'm in the process of adding a better networking library over time and figuring out if it'd be possible to use e.g. UNET relays for internet usage, but this won't be ready for some time
I bought AudioStream for a band app which I am developing.
I play a whole track of samples and can do tweaks like small offsets or jump to a different part of the song.
I play the samples track (1 audio source) on one device, and on the other I play the samples track plus a metronome (2 audio sources) for the drummer, but there is latency between the devices. Is there a way to make them more in sync?
Edit: Everything seems to work right when I set "Best Latency" in the project's audio settings, but after that there are sometimes cracklings, from buffer overload I guess? And I can't let this go
Hi, I don't quite understand your setup - are you using audio sources over local network ?
Best Latency is in general the preferable option, yes - if you encounter distortions in the output, try changing the DSP buffer size and count (i.e. increase both) - it is heavily device-dependent, as I found out.
DSP buffer sizes are provided only on some components, but I don't know which one you're trying to use.
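For context, a DSP buffer change like this maps under the hood to FMOD's setDSPBufferSize call; a minimal sketch with the FMOD C# wrapper (an illustration only - AudioStream components expose these as inspector properties, and the call must happen before the system is initialized):

```csharp
// Sketch only: increasing both DSP buffer length and count trades
// latency for output stability. Must run before system.init().
FMOD.System system;
FMOD.Factory.System_Create(out system);

system.setDSPBufferSize(1024, 4); // 1024 samples per buffer, 4 buffers

system.init(32, FMOD.INITFLAGS.NORMAL, System.IntPtr.Zero);
```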
I have just 3 audio sources, each using an AudioSourceOutputDevice component.
Two of them use one device, and the third one uses a second device.
(I duplicate one AudioSource so the same thing can go to both audio devices - maybe the library offers a better solution?)
I will check changing the DSP size tomorrow, no time left today
You can use one AudioSourceOutputDevice to output to other than default, and to default simultaneously if you leave 'Mute After Routing' unchecked
( but if outputs are both different than default you have to use more than one component - correct. You can 'optimize' to a degree by changing your system default, if possible)
Some latency is unavoidable, unfortunately, yes
The main latency is not a problem for me, but I want to keep the sound on both devices in sync; there is also desynchronization even when two audio sources send their signal to the same device (to the default too, but without the components it's synced).
Edit: My fault, it wasn't synced because I missed one AudioSource (some time ago I edited some logic and forgot about that) and it didn't have an AudioSourceOutputDevice component.
@Nirvan From my testing you will get desync only when the two output devices differ in 'speaker mode' - i.e. effective no. of output channels.
- I don't have good solution right now.
I had two game objects in testing scene with an AudioSource and redirection on each, with the same audio clip on both, each AudioSourceOutputDevice assigned its own output device id -
when AudioSources were being played at the same time, they started and played on the same output, or on each own output id, in sync - with the above exception.
For best results I recommend setting 'Best Latency' in AudioManager, testing in a build (i.e. not in the editor), building with the .NET 4.x runtime, setting the import settings on the clip(s) to Decompress On Load with compression format PCM (or possibly ADPCM), and keeping DSP sizes/counts as low as possible.
I've optimized a few things, but will be looking into it more
Btw I've sent you a PM / let me know if you want to look at testing scene. I don't have ETA for the above though right now /
Thanks! I just edited my post seconds before your post, I got this, but the new tips with compilation settings etc. will be helpful too; maybe they will allow me to use 'Best Latency', because for now I had to choose 'Good Latency' to avoid distortions.
Just started using AudioStream in a project I'm working on, absolutely love it!
One question though, I'm using the AudioStreamInput to grab the system audio.
That works great, however I noticed something a bit odd that I would really love to get fixed.
Let's say I start up my project in the editor, and then go over and start a video playing on youtube, go back over to my app and I can see I'm getting data, then I go back out and pause the youtube video.
For a short time, say half a second or so I can hear a clip of the audio still playing and fading out.
Just to demonstrate this to myself, if you throw a line drawer on the scene setup to render the FFT output of GetSpectrumData, the spectrum looks good, up until the outside audio source is paused or stopped.
The data the audio source is getting just keeps on looping and rendering data.
So, is there any way to sample the system audio without actually playing it a second time through unity? While still being able to call GetSpectrum and GetAudio?
Anything I have tried so far has completely broken it rather than fixing that...
Any suggestions or thoughts?
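For reference, a sketch of the kind of FFT line drawer described above (assumes an AudioSource on the same GameObject, e.g. the one driven by AudioStreamInput; the scaling factor is arbitrary):

```csharp
using UnityEngine;

// Renders the spectrum of the attached AudioSource as debug lines
// in the Scene view, one segment per FFT bin.
public class SpectrumLines : MonoBehaviour
{
    float[] spectrum = new float[512];

    void Update()
    {
        var src = GetComponent<AudioSource>();
        src.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        for (int i = 1; i < spectrum.Length; ++i)
        {
            // Arbitrary vertical scale just to make the bins visible.
            Debug.DrawLine(
                new Vector3(i - 1, spectrum[i - 1] * 10f, 0),
                new Vector3(i, spectrum[i] * 10f, 0),
                Color.green);
        }
    }
}
```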
I'm getting an ERROR: "AudioStreamMinimal sound.getOpenState ERR_NET_SOCKET_ERROR - A socket error occurred. This is a catch-all for socket-related errors not listed elsewhere."
If I use the demo URL it works well, but with a custom URL it doesn't work.
Custom URL example: http://ys-j.ys168.com/601753847/TLRJW8m4N146W55IUPIJ/h.mp3
The ERROR looks like it happens in FMOD. I hit this error two months ago and forgot how I fixed it. Do you have any solution?
If main latency is not a problem for you I recommend using other than 'Best Latency' setting until next version is out
I've discovered fmod dsp buffer limits when testing it out now - trying to figure out how to deal with it properly
In the meantime you can experiment with higher DSP buffer size and count with Best Latency setting if needed - such as 1024 / 4 - it should help on desktops.
Hi @aBs0lut3_z3r0 , thank you!
The fading out is probably caused by AudioStreamInput which has high latency due to being routed via Unity audio system - can you try AudioStreamInput2D component too (if you're not using it already) ?
but I'll look into both next
@orangetech the url points to a HTML, not audio content
when downloaded this is what's in 'h.mp3' file:
<head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><title>
I don't speak chinese, but it looks like the link expired and file was taken down by the service provider
generate new url: http://csk9522.ys168.com/
Changing to another server still does not work. Example URL: https://od.lk/s/NV8xMTE3MDg1MjNf/Hello.mp3
FMOD version is: 10908
There are two problems:
- original URL is a redirect to: https://web.opendrive.com/api/v1/download/file.json/NV8xMTE3MDg1MjNf?inline=1
- even with the redirected URL entered directly as the url for AudioStream the file cannot be found, since the server responds not with the entered filename (NV8xMTE3MDg1MjNf?inline=1), but with 'Hello.mp3' to be saved - which is apparently a problem for FMOD
Each of those issues, even separately, is a problem for FMOD, unfortunately
(I tested with today's release 1.10.06)
Unless you have direct resource/file URL - i.e. without any server software API interfering - I'm afraid there's not much we can do - in other words you need direct link to 'Hello.mp3' file
(network protocols are not FMOD's strong point, unfortunately;
another option for now would be to try to download the file separately via UnityWebRequest (I don't know how reliable that'd be in this case) and pass its filesystem path for playback once that's done)
I'm sorry I don't have a better solution for this right now
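To illustrate the workaround mentioned above, a sketch of downloading the file separately with UnityWebRequest and handing its local path to the playback component afterwards (the commented AudioStream property/method names are assumptions about the component's API, and reliability with this server is untested, as noted):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

// Downloads a remote audio file to local storage first, so playback
// can then use a plain filesystem path instead of the problematic URL.
public class DownloadThenPlay : MonoBehaviour
{
    public string sourceUrl;

    IEnumerator Start()
    {
        string localPath = Path.Combine(Application.persistentDataPath, "downloaded.mp3");

        using (var request = new UnityWebRequest(sourceUrl, UnityWebRequest.kHttpVerbGET))
        {
            request.downloadHandler = new DownloadHandlerFile(localPath);
            yield return request.SendWebRequest();

            if (string.IsNullOrEmpty(request.error))
            {
                // Hypothetical: point the playback component at the local file.
                // var audioStream = GetComponent<AudioStream>();
                // audioStream.url = localPath;
                // audioStream.Play();
            }
        }
    }
}
```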
Thanks for the reply. The broadcast stream is good enough. I hope to create a local player.
Hi there!!! I have a question about your plugin. I didn't read all of the messages on 7 pages, so sorry if I ask something that duplicates someone else's question. Does your plugin work with audio streaming just like the Unity VideoPlayer streams video? I mean, on iOS can I pause/unfocus my app, and after I unpause/focus it again the streaming resumes from the correct position, and can I adjust the playback time whenever I want? (talking about URL streaming)
yes - this was its primary purpose initially
I'm not sure how network connection behaves when in the background on iOS - it might be left open if you have correct background mode for application set
but in any case when AudioStream is paused (which must be called explicitly), no data is being read from the network, so the connection might very well just time out - I'm not sure since I've never tested this
If it's not paused explicitly, the Unity player is paused nevertheless, and so very likely is the audio thread on which the network is read; so again it would depend on whether iOS keeps the network connection open when the app is resumed - audio should continue, but possibly not exactly where it left off (again, I haven't tested this)
Chances are if the pause is brief it would resume properly, and if it's longer it would be just disconnected.
(the same would apply to VideoPlayer IMHO - although I might be wrong since I've never used it this way/on iOS)
Using non-Unity audio via AudioStreamMinimal will be the same (with the difference that the whole streaming/network connection is managed by FMOD)
AudioStream does not allow clip access when in streaming mode, so seeking in the stream is not possible, unfortunately.
However - audio can be streamed/downloaded faster than realtime using AudioStreamDownload component and then be accessed directly via automatically created AudioClip
You can check all functionality in demo builds which links are on the asset store page
Hope this helps!
Finally today I tested my app with the band using AudioStream, but I noticed that the sound is sent without the effects from the AudioMixer buses. Probably that's impossible to achieve, but also the volume is a little lower. Is there a way to make the sent volume match the volume level on the default device?
Hi @Nirvan - you are correct, AudioSourceOutputDevice does not work with AudioMixer unfortunately (full compatibility would require separate set of native plugins for each platform and would mean that existing user scripting access to AudioSources functionality wouldn't be possible as it is now)
- you can attach audio effects directly on the game object with AudioSourceOutputDevice though (just make sure that the effect components are *above* the redirection one, otherwise the effected buffer would not get picked up by it) - audio effects are in the Component -> Audio menu
- or you can attach AudioSourceOutputDevice at the final mix only after all AudioMixer processing already took place - at main AudioListener (usually on main camera) - this might not be useful to you though since *all* application audio would be redirected this way, not only separate AudioSource
I understand the above is not ideal, but I hope you can workaround at least to a degree for a cost of some work with audio effects this way
The situation with the AudioMixer + audio buffers accessible on each AudioSource separately (PCM callbacks) is a bit unfortunate right now for sure; ideally Unity would support this in some fashion, e.g. something like directing each mixer subgroup to a separate device (but what about the final mix then? what about the main audio listener? etc. etc.)
this usually happens when the devices have a different number of channels, so my advice would be to try to sync them, *IF* possible at all
otherwise, without something like a compressor I'm afraid there's little that can be done automatically, so for now it's probably best to bump up the volume on the originating AudioSource manually (via script)
Hope that helps, let me know if it makes sense !
@aBs0lut3_z3r0 I just tested both components (AudioStreamInput and AudioStreamInput2D) and they both work as expected - i.e. they reflect incoming audio data properly (when their respective source is paused/silenced they output 0 and resume when source is unpaused/played again)
AudioStreamInput has higher latency as mentioned before.
What is probably happening is that you don't have 'Run In Background' set (in Player Settings)
(audio thread is paused when Unity loses focus (when testing in Editor))
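The same setting can also be toggled from script if that's more convenient - a one-line sketch, equivalent to the Player Settings checkbox:

```csharp
// Keep the player (and its audio thread) running when the app loses focus;
// same effect as the 'Run In Background' checkbox in Player Settings.
Application.runInBackground = true;
```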
just submitted v 1.7.9 062018
- optimized audio buffer conversions throughout AudioSourceOutputDevice and AudioStreamInput* components (they are much faster and this means the former can be run even on mobiles now)
- tested Best latency setting with the above (you need ASIO drivers on Windows still)
- Added automatic DSP buffers option to components which allow DSP buffer customization
- prevent crashing in demo scenes which allow custom DSP buffer sizes on non Windows platforms
- updated demo scenes with some more pleasant testing audio
- fix for float->PCM16 conversion in GOAudioSaveToFile and IcecastSource components
- tested w FMOD 1.10.06
note about the 'Best latency' setting and AudioSourceOutputDevice on Windows - as I found out, it's probably impossible to guarantee there's enough audio for FMOD to pick up with any combination of its system's DSP buffer length and count - the audio buffer produced by Unity with this setting might often be too small to satisfy the FMOD PCM callback on a given sw+hw combination
ASIO drivers help on Windows though - which has downside that I couldn't come up with proper way to configure it in such a way that all output devices are accessible separately in ASIO properties - so for now direct ASIO support is not enabled in AudioStream (can be initialized rather easily with one FMOD call though - will see what to do with it maybe later)
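For reference, the single FMOD call mentioned - switching the output type to ASIO - would look roughly like this with the FMOD C# wrapper (a sketch only; it has to happen after System_Create but before init):

```csharp
// Sketch: select the ASIO output type on an FMOD system object
// before it is initialized; 'system' is an existing FMOD.System instance.
system.setOutput(FMOD.OUTPUTTYPE.ASIO);
```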
- all other Unity latency settings work as expected, so if you e.g. don't use 'Best latency' ( @Nirvan ) you should be fine; and the overall optimizations mentioned above should help, too.
(that is if submission confirmation email arrives - which didn't happen yet, hoping for the review to make it this time too -)
I recently purchased the AudioStream package.
The only functionality I require from it is the ability to change the audio output device during runtime. I would like to remove all unnecessary assets and code from the package that are not required for that feature.
Could you please tell me which are the minimum required assets I need to keep? I’d like to maintain the project’s cleanness for ease of debugging/use and having a lot of unused assets and code there is a bit of a risk.
Hi @CptDustmite ,
see this post (and picture) - https://forum.unity.com/threads/aud...all-and-everywhere.412029/page-4#post-3292783 -
that's for FMOD integrations package
as for AudioStream itself, well, you can press delete on the Demo folder (but I would strongly recommend keeping the OutputDevice demo/s to check if things are working - you'd have to modify e.g. OutputDeviceDemo.cs to not reference AudioStream and AudioStreamMinimal in the scene - I would strongly advise doing that though)
- Plugins (with two sources for iOS if you don't deploy to that)
- you can delete the other custom editor
- you have to keep _MainScene sources (this is a wrong dependency - I will have to remove it - it's for build settings info displayed in demo scenes)
- keep sample sound
- and from Scripts delete all except the AudioSourceOutputDevice folder and the AudioStreamSupport.cs source, but also keep BasicBuffer.cs which is in Network (I will have to place it in a more proper place, too)
You should end up with something like this in AudioStream folder ( that's for current 1.7.8 on the store):
│ ├── OutputDevice
│ │ ├── MultichannelOutputDemo.cs
│ │ ├── MultichannelOutputDemo.unity
│ │ ├── OutputDeviceDemo.cs
│ │ ├── OutputDeviceDemo.unity
│ ├── UnitySpatializer
│ │ ├── sine.aiff
│ └── _MainScene
│ ├── BuildSettings.asset
│ ├── BuildSettings.cs
│ └── MainScene.cs
│ └── AudioSourceOutputDeviceEditor.cs
│ ├── AudioSourceOutputDevice.cs
│ ├── PCMCallbackBuffer.cs
│ ├── AudioStreamSupport.cs
You can run the output device demo scenes with this with everything else gone
Hope this helps! Let me know if there's anything else
The effects I wanted to use are in mixers, so I must say bye bye to them :[
I will only use Windows; you said integration would require work for all platforms - maybe it could be possible to do just for Windows?
I would try it myself, but it would probably require too much work and knowledge.
As for my second question, I will try syncing the channel numbers and see; all sources are at the max, but I need even more loudness
Man you're killing me :] - I'll see what I can do, but it'll take some time since for the next week or so I'll be preoccupied with other stuff
.. it certainly makes sense to tell it right on the mixer on which device to go - the signal - for all intents and purposes - just disappears from master mix though - so not sure how that'd go with the rest of the system - but i'll have a look
this thanks to quick review is live'n'ready
A few weeks ago I updated my copy of AudioStream only to find it introduced some *huge* performance spikes. I tried updating again to the new version from Friday but this hasn't helped the problem. Here is what the profiler shows:
These spikes are regular and causing big problems. In my project I have 3 audiostreams of internet radio stations running at once. This didn't cause any issues until I updated a few weeks ago. Any chance you can investigate what the issue may be? Thanks.
Hello @JonDadley !
That looks grim indeed, but I have a suspicion that it's in initial network connection attempt - can you send me URLs used for the three AudioStreams ? (you can PM them if necessary)
Did you update Unity as well ? On which version is this running ?
If you're on Windows I think I know which issue you hit (and updating fmod to latest won't help as well)
Let me know the URLs nevertheless, please; thanks !
Ok, found the culprit (or two in fact) -
- just to verify: would it be possible @JonDadley to comment out the whole 'sound.getTag' while block in AudioStreamBase.cs (around line 680)? - to verify whether skipping the whole tag retrieval functionality makes the major spikes go away - thank you!
As for the second one - it seems I overthought some data containers for the PCM callback a bit - if the above doesn't help completely right away, I'll send you an update once I tidy up the sources properly;
Of course tag retrieval should work for an internet radio anyway - I'll update you shortly once I have the update ready - this is just for confirmation that it is indeed the problem.
Thank you very much for reporting this once more - and sorry for the inconvenience! / this is a bad interplay between the newly added background thread and the FMOD API - I'll test it properly and add a user-configurable interval for this if it doesn't work automatically /
@r618 Thanks for the fast response! I really appreciate you taking a look at the issue so promptly
I commented out the whole sound.getTag while block but unfortunately it didn't seem to make the spikes go away. Hopefully your data container rework will fix up the issue
Hi there r618, I have just started tinkering with AudioStream. I'm finding that if you call AudioStreamBase.Play() too early (i.e. within the Start() method of any MonoBehaviour which is initialized on scene load) you get the error: Exception: system.setStreamBufferSize ERR_INVALID_PARAM - An invalid parameter was passed to this function.
Does AudioStreamBase provide some way for you to know when you can and can't call the Play() method without causing an error?
Hello @Allthebees - just check public 'ready' flag before playing
- the only demo scene where its usage is not shown is, ironically, the one for AudioStream itself - sorry about that!
You can refer to e.g. the input demo/s to see how it's synced in the Start coroutine, or just check it whenever needed.
Let me know if this makes sense, or you need any other advice ~
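To make the pattern concrete, a sketch of waiting for the public 'ready' flag in a Start coroutine before calling Play(), as the input demo scenes do ('AudioStream' here stands for whichever AudioStreamBase-derived component is attached):

```csharp
using System.Collections;
using UnityEngine;

// Defers Play() until the component has finished initializing,
// avoiding the ERR_INVALID_PARAM exception from calling it too early.
public class PlayWhenReady : MonoBehaviour
{
    IEnumerator Start()
    {
        var stream = GetComponent<AudioStream>();

        // 'ready' is the public flag mentioned above.
        while (!stream.ready)
            yield return null;

        stream.Play();
    }
}
```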
Great thanks for that, got it working!
Appreciate the help.
Thanks for the quick response! Solved all my issues.
Might be a good idea to make the AudioStreamBase.ready description a bit longer so other people don't make the same mistake as me
Yeah, you're right - I was kinda hoping people would use demo scenes as primary reference, but it's even misleading in this case
I will have to tidy up demo scenes anyway, based also on what CptDustmite wanted to do, so I will include this for sure
Thanks for feedback !
the 184.108.40.206 bugfix for the performance problem in the AudioStream component mentioned above by @JonDadley is on its way and should hopefully be live soon
I am getting back to this again finally. Do we need to put the videos inside of the StreamingAssets folder or can we use them with the Unity "VideoPlayer" component? Thanks again!
Hi, no problem
for clarity, here's the whole setup:
- setup your video in the scene, with its output to AudioSource:
The 'AudioSource' game object:
- you set up your required audio output there on AudioSourceOutputDevice, and note the added helper script 'AudioClip4Video'; that's just a helper which creates the needed AudioClip:
public class AudioClip4Video : MonoBehaviour
{
    void Awake()
    {
        var asource = this.GetComponent<AudioSource>();
        var samplesLength = AudioSettings.outputSampleRate; // 1 sec worth of looping buffer
        asource.clip = AudioClip.Create("", samplesLength, 2, AudioSettings.outputSampleRate, true, null);
        asource.loop = true;
    }
}
- when you play the video its audio is output to this AudioSource by VideoPlayer, where it can be picked up by AudioStream; so the video is completely independent and
it does not matter how VideoPlayer gets its video file for this (AudioStream)
- i.e. you can use VideoClip as Source for clips imported into project, or you can use URL source on VideoPlayer (and pass full file/http path for compatible videos directly)
Let me know if it helps!
(I've hardcoded 2 as the number of channels in the AudioClip creation in the script - that's usually enough, but of course you might want to make this reflect the actual number of channels in the video)
It would be amazing if there was a way to capture audio from the output device on macOS.
Have you tried Soundflower or Loopback ? - https://rogueamoeba.com/freebies/soundflower/
( I haven't recently )
Soundflower currently lacks its GUI - Soundflowerbed - it seems, but there's a whole process of setting up a recording device mentioned on the release page - https://github.com/mattingalls/Soundflower/releases/tag/2.0b2
.. meanwhile I had to fix stream playback with the AudioStream component under the Best latency Unity setting (there might have been occasional sample drops which could become more apparent over longer time depending on the platform/hw used) - with more than millisecond precision and dynamically updated network timeouts
I've tested with more than 8h of continuous streaming on all platforms and the playback is much more resilient to small time fluctuations, which results in more stable audio over longer times as well
Hopefully this will be a quality update until I get to the previously discussed
v 220.127.116.11 072018
- AudioStream - solved stream playback with the Best latency AudioManager setting on all platforms tested, with the needed submillisecond resolution
(as a consequence the network buffer is drained more consistently at startup and during playback and is more resilient to stream/timing fluctuations as well)
- added playback time to playback components
^ this just submitted, should be live in the next few days
after an extremely quick submission approval this update is live
/ i'm really glad this is out
I've implemented a basic audio plugin for Windows which outputs a mixer group signal to another device in the system (instead of passing it to the next/main mixer group)
I'm not entirely satisfied with it due to how latency seems to be working with the mixer right now, but anyway -
EDIT: solved, ready for mixing, yay
- if anybody is interested in testing this out ( @Nirvan ), PM me please for dl link and instructions
Thanks for your previous help.
I am using AudioStream just for its audio output device switching, as mentioned earlier.
I used to have code that basically finds all audio sources in the scene, adds an AudioSourceOutputDevice component onto each of them, and switches the output device.
With that method I get a memory error in the console:
Exception: FMOD.Factory.System_Create ERR_MEMORY - Not enough memory or resources.
I fixed this by, instead of finding all audio sources, individually selecting the few that I need on the other output device.
This is a bit tedious though, in case I forget one.
Is there any way I can change the output device for several audio sources, or is that going to cause a memory error every time?
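For reference, a sketch of the "attach to everything" pattern described above - not recommended as-is, since judging by the ERR_MEMORY from FMOD.Factory.System_Create, each added component apparently creates its own FMOD system, which is what exhausts resources with many sources (the device id field name here is an assumption):

```csharp
using UnityEngine;

// Adds a redirection component to every AudioSource in the scene.
// With many sources this triggers the ERR_MEMORY described above,
// so prefer attaching only to the few sources that need redirecting.
public class RedirectAllSources : MonoBehaviour
{
    public int targetOutputDriverId = 1; // hypothetical device id

    void Start()
    {
        foreach (var src in Object.FindObjectsOfType<AudioSource>())
        {
            var redirect = src.gameObject.AddComponent<AudioSourceOutputDevice>();
            // redirect.outputDriverID = targetOutputDriverId; // assumed field name
        }
    }
}
```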