
WIP: New Audio System

Discussion in 'Works In Progress' started by PhobicGunner, Feb 23, 2016.

  1. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    So, everyone knows Unity's audio system is pretty basic. Even with the new Audio Mixing system, it still requires a frankly shocking amount of code to get things working. I've heard stories of sound design studios refusing to touch it without the aid of third-party plugins.

    Thing is, I think that even with the third-party plugins available, audio workflow in Unity still isn't quite where it should be. So I've set about creating a new audio plugin for Unity which:
    1. Was designed to be not just easy for a sound designer to use, but maybe more importantly familiar: it borrows conventions from external tools they may already know. In particular, inspirations came from Audition, FL Studio, and FMOD Studio.
    2. Integrates as closely as possible with Unity's existing audio system. This allows audio programmers to take the system even farther, or to integrate it with pre-existing solutions (such as binaural audio plugins).
    So far, this is the solution I have come up with.
    My audio system groups audio clips into units of "Audio Event". Each Audio Event has a timeline and a number of audio tracks. You can drag audio clips from the project view directly onto the timeline and position them. There's a number of editing features here such as ripple insert (insert and shift everything to the right), trimming audio clips, ripple delete (delete and shift everything to fill the gap), etc. Each track has independent volume controls, mute, and solo buttons.
    There's also a neat feature called Loop Regions. You can right-click any clip on the timeline and select "Mark Loop Region". This sets the loop start to the beginning of the clip and the loop end to the end of the clip (both can be fine-tuned afterwards from the inspector). This is very useful for looping music with intro and outro sections, and it can also be used for automatic gunfire, as shown in the video here. Playing a sound on a timer usually sounds somewhat "off", which is why some games just loop a sound instead. Previously that was impossible without completely custom programming, but this system makes it easy for a sound designer to do exactly that.
    This is based on a completely custom-written playback system on top of Unity's audio API. It allows you to treat an AudioEvent as a single unit, instead of disparate audio clips.

    This is a video of what I have at the moment.



    Keep in mind this is definitely unfinished. There's many features I still need to add to the editor, and also many more core features I want to make available. For example, I plan on including a system of tying controls like pitch and volume to user-definable parameters, somewhat akin to providing parameters to an Animation Controller. This will give the sound designer even more power to prototype a sound's functionality without needing a programmer's help.
    I will also be adding functionality to play Audio Events from code, using an object pool to reduce memory.
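    As a rough illustration of the pooling idea, here is a minimal sketch of an audio source pool. All names here are hypothetical, not the plugin's actual API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative only: a tiny object pool for audio playback objects, along
// the lines described above. Class and method names are made up for the sketch.
public static class AudioEventPool
{
    static readonly Stack<AudioSource> pool = new Stack<AudioSource>();

    public static AudioSource Spawn(Vector3 position)
    {
        // Reuse a pooled source if available, otherwise create a fresh one.
        AudioSource src = pool.Count > 0
            ? pool.Pop()
            : new GameObject("PooledAudioEvent").AddComponent<AudioSource>();
        src.transform.position = position;
        src.gameObject.SetActive(true);
        return src;
    }

    public static void Despawn(AudioSource src)
    {
        src.Stop();
        src.gameObject.SetActive(false);
        pool.Push(src); // return to the pool instead of destroying, avoiding GC churn
    }
}
```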

    EDIT: Also, by the way, one thing I want to support is being able to use this system to do things like, for example, put a regular gun sound on one track and a "distant" gun sound on another track, then crossfade between the tracks with distance. Or, as another example, put different variations of a sound for different environments and switch between tracks based on where the source is located. The goal will be to allow the sound designer to create these behaviors, and then all the programmer has to do is play the audio event and supply the parameters.

    EDIT 2: Actually, one change I'm going to look at implementing is allowing different types of clips on the timeline. Besides the plain audio clips it supports now, there could be Multi Sounds, containers for several audio clips that are selected at random (or perhaps with the selection tied to a parameter), and maybe even other audio events nested as "clips" on the timeline.
     
    Last edited: Feb 23, 2016
    Peter77 likes this.
  2. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Lots of refactoring, and a few new features added.



    There are now three clip types:
    • Single Clip is just a single audio clip, as the name implies.
    • Multi Clip is a container of several audio clips. One is selected at random every time it's played. I plan to expand this with configurable random behavior as well as being able to tie it to event parameters.
    • Event Clip is a really neat one: it allows you to embed another Audio Event on the timeline as if it were a clip. That event can have its own entire timeline and settings, and could even have its own Event Clips. Turtles all the way down... :cool:
    My next task is likely to finish off the missing functionality of the editor window (such as the lack of scrolling). Then I'll tackle event parameters, which is where things should really get interesting... :)
     
  3. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Not much to report for today, other than a lot of work went into performance optimization with some pretty nice gains :)
    Tested with 40 looped sources playing a footstep sound (with a Multi Clip), plus a source playing looped music. Previously, this caused a lot of glitching and repeated audio because mixing exceeded the audio thread's time budget. So I had to drastically refactor things, but it paid off: there's no more glitching or repeating.
    So, there's that :)
     
    zyzyx and NickHaldon like this.
  4. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    New update!
    Aside from a number of editor improvements, I've now partially implemented parameters and automation features! :D
    You can add float, int, and boolean values to an audio event. These values can be set from game code and can be used to control playback of the audio event. Some things can be tied directly to these parameters, such as pitch and volume. You can also use an integer parameter to control clip selection in a Multi Clip instead of the default random selection.
    Clips also now have conditions. You can pick which values to check, the comparison, and what value to compare against. If all conditions of a clip are met, that clip is played. Otherwise, it will remain silent.

    In this video I show a few examples of using clip conditions. I also show off controlling the event's pitch using a "_Timescale" float parameter which is set to Time.timeScale every frame (I'm going to add direct support for timescale stretching later without the need for parameters, but for now it made for a neat demo).
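    The timescale demo described above could look something like this in game code. This is a sketch only: the thread never shows the plugin's real API, so the stub player class and the SetParameter call are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal stand-in for the plugin's event player; only a parameter setter is
// sketched. The real component's name and signature may differ.
public class AudioEventPlayerStub : MonoBehaviour
{
    readonly Dictionary<string, float> parameters = new Dictionary<string, float>();

    public void SetParameter(string name, float value)
    {
        parameters[name] = value; // the real player would feed this into automation
    }
}

public class TimescalePitchDriver : MonoBehaviour
{
    public AudioEventPlayerStub player;

    void Update()
    {
        // Feed the engine timescale into the event every frame; pitch
        // automation on the event can then reference "_Timescale".
        player.SetParameter("_Timescale", Time.timeScale);
    }
}
```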

     
    zyzyx and manpower13 like this.
  5. ng93
    Joined: Aug 31, 2012
    Posts: 40
    This looks awesome! Seems like you're using the standard AudioSource component to play the clips, so I guess this should be compatible with Unity's audio mixer? What about multi-platform support?

    (Also, probably too early, but any idea of a release date?)
     
  6. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Don't really have a release date yet. Still got a number of bugs and quirks to iron out, plus extra features to implement :)
    Yes: it actually generates audio data on the fly and plays it through an audio source (it's implemented as a custom audio filter). Except for a few edge cases, it is nearly 100% compatible with Unity's own audio system. I'll probably be filing a bug with Unity about some of those edge cases; for example, it turns out reverb zones don't work with custom audio filters, although reverb *filters* work fine.
    It's all implemented in pure managed C#, so it should work on every platform.
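    For anyone curious how the "custom audio filter" approach works in general, here is a minimal sketch: a script with OnAudioFilterRead on the same GameObject as an AudioSource can fill the output buffer itself. This example just generates a sine tone rather than mixing a timeline, so it only illustrates the mechanism, not the plugin's actual mixing code.

```csharp
using UnityEngine;

// Sketch of generating audio in a custom filter. Attach next to an AudioSource;
// OnAudioFilterRead runs on the audio thread and receives an interleaved buffer.
[RequireComponent(typeof(AudioSource))]
public class GeneratedPlayback : MonoBehaviour
{
    public float frequency = 440f;
    double phase;      // normalized phase in [0, 1)
    int sampleRate;

    void Awake()
    {
        sampleRate = AudioSettings.outputSampleRate;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        double step = frequency / sampleRate;
        for (int i = 0; i < data.Length; i += channels)
        {
            float sample = Mathf.Sin((float)(2.0 * Mathf.PI * phase)) * 0.25f;
            for (int c = 0; c < channels; c++)
                data[i + c] = sample; // same signal on every channel
            phase += step;
            if (phase > 1.0) phase -= 1.0;
        }
    }
}
```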
     
  7. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    OK, so new update today. No vid because I'm lazy and there's not really much to actually show :p

    For one, I've added a lot more undo support to the editor window. Not being able to undo things was pretty irritating, not to mention against Asset Store standards ;)
    But the more important change was to how automation works. Now, instead of a dropdown, you get a text field in which you can type an expression.
    Even though the old way was kind of nice and user friendly, it was still very limited in terms of what it could actually do. So now, in exchange for a little bit of user friendliness, you get a hell of a lot more power and functionality.
    The expression can be treated like a math expression (for example, you might write "1.0 - parameterNameHere"), but it's actually full-blown JavaScript (standard JavaScript, not UnityScript), based on the expression parser by StagPoint Consulting (used with permission). So you can do anything in these expression fields that you can do in full JavaScript.
    There's a number of functions available to these expressions. Aside from a bunch of standard math functions (sin, cos, tan, abs, etc.), there are currently two custom ones, "random" and "randomInt". These are special in that they return a random value within a specified range which only changes when the audio event is played or loops. This lets you do things like randomly vary the pitch of the audio event each time it's played. For example, I defined PitchMin and PitchMax parameters and typed this into the Pitch Automation expression field:

    Code (csharp):
    random( PitchMin, PitchMax )
    This makes the audio event randomly vary its pitch between those two parameters every time it's played or loops :)

    Or, as another example, you could have a value that crossfades between two tracks, by giving each a volume automation expression, like so:

    Code (csharp):
    [On Track 1:]
    CrossfadeParameter

    [On Track 2:]
    1.0 - CrossfadeParameter
    Whereas before you'd have to have two parameters and set both from script.

    The hope here is that, even though it may be a bit more involved, it should enable the sound designer to have much more control over the behavior of the sound without having to involve a programmer.
     
  8. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    New video update today :)
    Not only can you preview audio events directly in the scene by enabling audio preview in the scene view (just like standard audio sources), you can now also preview them in the audio editor window using Play / Pause buttons.

    On the downside, speaker panning is somewhat broken at the moment, a side effect of fixing compatibility with reverb zones. I'll have to introduce my own panning code rather than relying on Unity's, but that's OK; in the end I think it will be better that way.

     
    zyzyx and manpower13 like this.
  9. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Want to report a new feature I'm adding. No videos, since there isn't much to show ;)

    Basically, so far I've shown playing audio events using the AudioEventPlayer component. I did some refactoring so that there was an AudioEventPlayerBase class that AudioEventPlayer inherits from, and I am adding a new component StreamedAudioEventPlayer.

    Up till now, AudioEventPlayer and the system as a whole did not support streamed or compressed clips. You had to set your clips to Decompress On Load, as that was the only way I could retrieve data via GetData. The purpose of StreamedAudioEventPlayer is to allocate multiple audio sources internally and use PlayScheduled / SetScheduledStartTime / SetScheduledEndTime to play audio events. This way I can play streamed and compressed audio clips perfectly fine; it's meant for things like background music tracks. However, it does introduce several limitations:
    1. I cannot play nested audio events this way (or, well, I suppose I could, but it would be a lot of work to support something that, as far as I can tell, isn't really necessary for background music or ambience).
    2. I can no longer have loop points in the middle of audio clips. Or rather, I can, but the clip will always play from the beginning, because PlayScheduled doesn't let you specify a start offset within the clip.
    3. It allocates two audio sources per audio track. This is perfectly acceptable IMHO for things like music tracks, but would be way overkill for one-shot game sounds.
    My idea is that the system will try to handle these details for you as much as possible. It will detect when single or multi clips reference a streamed or compressed audio clip asset, and in that case will set a flag that tells the system to play it using the StreamedAudioEventPlayer instead of the regular AudioEventPlayer. It will also display a message in the inspector if the audio event is marked to play streamed samples, in case you accidentally used streamed or compressed samples in a one-shot sound effect that shouldn't be streamed.
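    The PlayScheduled technique described above can be sketched roughly like this. Only the double-source, sample-accurate scheduling mirrors the post; the class and field names are illustrative:

```csharp
using UnityEngine;

// Two AudioSources alternate; each clip is scheduled to start exactly when
// the previous one ends, giving gapless playback of streamed/compressed clips.
public class ScheduledChain : MonoBehaviour
{
    public AudioSource sourceA;
    public AudioSource sourceB;
    public AudioClip[] clips;

    void Start()
    {
        double startTime = AudioSettings.dspTime + 0.1; // small scheduling lead
        for (int i = 0; i < clips.Length; i++)
        {
            // Alternate between the two sources for back-to-back clips.
            AudioSource src = (i % 2 == 0) ? sourceA : sourceB;
            src.clip = clips[i];
            src.PlayScheduled(startTime);
            // The next clip begins exactly when this one ends, computed in
            // samples so the boundary is sample-accurate.
            startTime += (double)clips[i].samples / clips[i].frequency;
        }
    }
}
```

    While one source plays, the other sits idle waiting for its scheduled start, which matches the two-sources-per-track layout described in the post.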
     
  10. ShahidRaja
    Joined: Mar 7, 2016
    Posts: 1
    Hi PhobicGunner,
    I am developing a voice chat application; it works like a group chat in public chat rooms. My development language is C#, using WPF and WCF. I am a bit confused: voice packets travel through the server but never reach the other end. Can you help me if I give you access to my project?

    Thanks
     
  11. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Shahid, I'm not familiar enough with WPF or WCF to help with your problem. If I were you, I would post a question on StackOverflow for help.
     
  12. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    New video today.

    - Showing off support for streamed or compressed samples.
    - Added audio source-related properties in the audio event inspector.
    - Now the audio event editor window displays a notification if you try to play an audio event and no open scene view has audio preview enabled.
    - Finally got around to implementing audio fade in/out. It's a normalized 0-1 value for both. The regular non-streamed event player does this at the sample level, so you can actually do sub-frame fades (the streamed version does not unfortunately, since it does not handle audio at the sample level).
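    A sample-level fade of the kind described in the last bullet might look like this in an OnAudioFilterRead callback. This is a sketch of the general technique, not the plugin's actual code:

```csharp
using UnityEngine;

// Ramps gain per sample during a short fade-in window. Because the gain is
// applied inside the audio callback, the fade can be shorter than one frame.
[RequireComponent(typeof(AudioSource))]
public class SampleFadeIn : MonoBehaviour
{
    public float fadeSeconds = 0.05f;
    int samplesFaded;
    int fadeLengthSamples;

    void Awake()
    {
        fadeLengthSamples = Mathf.CeilToInt(fadeSeconds * AudioSettings.outputSampleRate);
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int i = 0; i < data.Length; i += channels)
        {
            if (samplesFaded >= fadeLengthSamples) return; // fade complete
            float gain = (float)samplesFaded / fadeLengthSamples;
            for (int c = 0; c < channels; c++)
                data[i + c] *= gain; // attenuate all channels of this sample frame
            samplesFaded++;
        }
    }
}
```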

     
  13. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    It's just occurred to me that maybe people will want to attach filters to their audio events and tie those properties to parameters too (for instance, type an expression to map a 0-1 value to a range of values for lowpass filter cutoff or whatever). I will spend time thinking about how best to approach this issue because it would be a nice feature if implemented properly.

    EDIT: OK, so this may turn out to be a bit messier than I'd like, but here goes...
    I'm thinking of allowing Audio Events to have a slot for a "filter chain". The filter chain would actually be a prefab in your project. This prefab contains a list of components which inherit from an AudioFilterBase class and define parameters (using a special AudioFilterParameter class that has its own property drawer and can be evaluated to a float value, allowing expressions that use audio event parameters).
    At runtime, when that audio event is played, the system copies all components inheriting from AudioFilterBase from the filter chain prefab onto the newly created audio event player object and calls an Init method (allowing each filter to compile any expressions and obtain a reference to the runtime copy of the playing audio event).
    I'll provide a few built-in filter components that mirror Unity's own filters (lowpass, highpass, chorus, echo, distortion, and reverb), which should also serve as examples of how to write custom ones (or wrap a third-party audio filter if necessary).
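    A rough sketch of that runtime copying step: AudioFilterBase and Init are the post's own names, but Unity has no built-in component copy, so the reflection-based field copy here is an assumption about how it could be done.

```csharp
using System.Reflection;
using UnityEngine;

// Base class as described in the post; concrete filters would override Init.
public abstract class AudioFilterBase : MonoBehaviour
{
    public abstract void Init(); // compile expressions, find the runtime event
}

public static class FilterChainUtil
{
    // Copies each filter component from the chain prefab onto the player
    // object by adding the same component type and mirroring its public fields.
    public static void Attach(GameObject player, GameObject chainPrefab)
    {
        foreach (AudioFilterBase template in chainPrefab.GetComponents<AudioFilterBase>())
        {
            var copy = (AudioFilterBase)player.AddComponent(template.GetType());
            foreach (FieldInfo field in template.GetType().GetFields(
                         BindingFlags.Public | BindingFlags.Instance))
                field.SetValue(copy, field.GetValue(template));
            copy.Init();
        }
    }
}
```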
     
    Last edited: Mar 9, 2016
  14. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Well, here's my first pass attempt at supporting filters, pretty much exactly as I outlined in the previous post:

     
  15. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    OK, well, now if anybody's reading and interested I'd like feedback on an important decision I'm now facing.

    So, I started this whole thing originally with the intent to write a completely custom audio engine. I believed this was the only way I could achieve the feature set I wanted.
    Later, I had to add a solution which went through Unity's own audio playback code anyway, so that I could support streamed samples. Just a couple of days ago, I added the option to force an audio event to play that way even if it contained no streamed samples, for a potential performance gain by going through native playback code instead of my managed code.
    And now I realize that the benefits of my custom playback code are few enough to count on one hand:
    1. It can allow for smoother fading if fade durations are very short.
    2. It does allow the beginning of the loop region to start in the middle of an audio clip.
    3. It was quite easy to allow for nested audio events on the timeline.
    And, actually, the first one could be addressed by using a custom post-filter on the streamed event player to do the fading at the sample level.
    The downsides, however:
    1. I have two completely different playback systems now. There's lots of duplicate code there, and two totally different systems to maintain.
    2. You have to pick the correct one of the two systems when placing static audio events in your scene. If you later change a non-streaming audio event to contain compressed or streamed samples, you have to delete and re-create those sources in the scene to use the other system. This forces you to know way too much about the internals of how sounds are played and creates extra work in some cases, when you really shouldn't have to care at all.
    3. Pardon my French, but spatialization in custom playback code is a complete pain in the ass. Unity should, but doesn't, handle panning for you (it only handles distance attenuation). I could write my own panning code, but it would most probably break if somebody were using the new spatialization SDK feature. I also have no idea how I'm supposed to handle the subwoofer channel, nor how to support Pro Logic setups, since I don't know how the encoding works (it's two channels, but somehow encodes 5.1 surround information). There's a hack where you feed an audio clip containing all 1.0 samples through and read back the gain per sample, but for some reason the values dramatically exceed the -1.0 .. 1.0 range; it still doesn't solve the subwoofer or Pro Logic issues, and it completely breaks support for reverb zones.
    So, I'm considering dropping my custom playback code altogether and just going with the PlayScheduled / SetScheduledEndTime method. Even though it does allocate two audio sources per track, only one of those sources is ever playing anything at any given time (so the other is just sitting idle). It solves the issues I've outlined, and the only two downsides would be these:
    1. You could not start a loop point in the middle of a clip (it would play the clip from the beginning regardless of where the loop point started).
    2. I'd have to ditch support for nested audio events. I'm not strictly sure how useful this actually would be, quite frankly I threw it in just because I could.
    So, question is whether these two issues are dealbreakers, or if any of you interested would be willing to accept these limitations in exchange for me simplifying things down a bit.
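    For reference, the hand-written stereo panning mentioned in point 3 is often done with a constant-power pan law; a minimal sketch (plain C#, not tied to any Unity API):

```csharp
using System;

// Constant-power stereo pan law: pan runs from -1 (full left) to +1 (full
// right). Mapping pan onto a quarter circle and using cos/sin keeps the total
// power (left^2 + right^2) constant at every pan position.
public static class Pan
{
    public static (float left, float right) Gains(float pan)
    {
        double angle = (pan + 1.0) * 0.25 * Math.PI; // [0, pi/2]
        return ((float)Math.Cos(angle), (float)Math.Sin(angle));
    }
}
```

    At center (pan = 0) both gains are about 0.707 rather than 0.5, which avoids the perceived volume dip of simple linear panning.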
     
  16. ng93
    Joined: Aug 31, 2012
    Posts: 40
    How is this coming along? Also, a bit late, but I'd be OK with you dropping the custom playback.
     
  17. PhobicGunner
    Joined: Jun 28, 2011
    Posts: 1,813
    Well, I decided to work on a short game jam project using the audio system (and then came down with a cold - oh what fun). Probably won't finish the game jam project but it turned out to be very helpful in discovering several bugs in the system.
    So I'll be ripping out the custom playback system and cleaning up everything else, fixing the bugs I discovered in the other project, and then working on promo materials for it.
     
  18. ReaktorDave
    Joined: May 8, 2014
    Posts: 115
    Hey PhobicGunner, any updates on this? Looks cool. Do you want to open source this?

    Concerning your problem with the spatialization: the last time I tested this, OnAudioFilterRead input on an AudioSource worked with full spatialization support.
     
  19. PixelLifetime
    Joined: Mar 30, 2017
    Posts: 64
  20. ReaktorDave
    Joined: May 8, 2014
    Posts: 115
    @MakabreGaming Unity devs are working on a new audio system with very cool artist tools. They demoed a lot of it at a small meeting during one of the recent conventions, but I don't know when they plan to release it. The new 2019 alpha is supposed to include part of that new system (called something like Audio Graph), but I haven't had time to look into the alpha.
     
  21. PixelLifetime
    Joined: Mar 30, 2017
    Posts: 64
    @ReaktorDave That's good to hear! But that's also the problem: it's not official, and there's no way to know whether they'll release it next year or in five years. Meanwhile, people could spend that time learning other tools and be sure they won't lose any valuable time.

    [Attached screenshot: Unity roadmap]
    The Unity roadmap has only those two entries tagged Audio. There is no information about Audio Graph on the internet either.
     