Audio Helm - Native Synthesizer, Sequencer, Sampler [RELEASED]

Discussion in 'Assets and Asset Store' started by mtytel, Oct 26, 2017.

  1. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Audio Helm is released!
    Anyone want a native synth and sequencer in their game?
    Audio Helm is a live audio synthesizer, sequencer and sampler for Unity. With Audio Helm you can create generative music and musical sound effects for your game.

    Asset Store Page
    https://www.assetstore.unity3d.com/#!/content/86984

    Intro Video:


    Links
    - Manual
    - Standalone Synth Editor (Free/PWYW)
    - Video Tutorials

    Synthesizer
    The synthesizer generates dynamic audio live, no samples or recordings required. It runs as a native plugin to ensure low-latency, high-performance, mobile-ready audio. Download the standalone synth now (free or pay what you want) to browse and create synth patches you can import into your game.

    Sequencer
    The sequencer is a tool for creating musical patterns and rhythms by playing synthesizer or sampler notes over time. You can create your own patterns inside Unity's inspector or create them live from code to generate procedural music.
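    A minimal sketch of creating a pattern from code, assuming the AudioHelm `Sequencer` component exposes `Clear()` and `AddNote(note, start, end)` roughly as in the scripting reference (the exact signatures may differ; check the manual):

    ```csharp
    using UnityEngine;
    using AudioHelm;

    // Sketch: fill a sequencer with a random pattern at startup.
    public class RandomPattern : MonoBehaviour
    {
        public Sequencer sequencer;   // a HelmSequencer or SampleSequencer

        void Start()
        {
            sequencer.Clear();
            for (int step = 0; step < 16; step += 2)
            {
                int note = 60 + Random.Range(0, 12);      // random note in one octave
                sequencer.AddNote(note, step, step + 1);  // start/end in sixteenth steps
            }
        }
    }
    ```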

    Sampler
    The sampler takes an audio sample or recording and can play it back at different speeds to create musical pitches. Using different keyzones you can create a full-spectrum piano sampler. Audio Helm comes with 4 drum machines, each with a separate sample bank.

    OS Support
    - Windows 7 and higher
    - MacOS 10.7 and higher
    - Linux, e.g. Ubuntu Trusty and higher
    - iOS 8 and higher
    - Android 5.0 (Lollipop) and higher

    One of three video tutorials:


    There is an intro price of $40 ($80 normally)
     
    Last edited: Sep 28, 2018
  2. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    Hi, I just purchased this and I am hearing a popping sound just before the notes play on the Piano Sampler. I am using a Sample Sequencer and just testing out playing a single note. The popping only starts after several loops have run. Currently testing C-1 though I'm pretty sure it does it with any note.

    Also, this is unrelated to your kit, but do you have any experience with visualizing audio information via AudioSource.GetSpectrumData? I want to be able to visualize multiple Audio Sources in the scene (but not all of them, otherwise I'd be able to use AudioListener.GetSpectrumData), so I need a way to combine the spectrum data from multiple Audio Sources. Again, this is unrelated to your product, but I thought I'd ask since you appear to be an audio expert.

    Thanks!
     
  3. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    Okay, I see now that the popping sound is coming from the program stopping one of the sounds before it completely finishes, because my "Num Voices" value was not high enough. For instance, if it's set to 2, then on the third loop it will pop as it stops the first loop's sound in order to play the third loop's sound.

    This may be a dumb question (I have no knowledge about this audio stuff), but would it be possible to reproduce how an actual instrument works when playing a note that is already being played? Like if the C-1 key on a piano has been played and is kind of just reverberating (correct term, I hope), and then I press it again, the new note will take over the first, since they use the same hardware to produce the sound. This seems like it would cut down on the number of "voices" played at once, if possible.
     
  4. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Like you said Num Voices should fix it. I think there's a bug where the sampler is not listening to Note Off events from the sequencer, so that can leave a lot of trailing voices on (and have multiple of the same note). I'll fix that in the next release.

    As for the spectrum data, you might be able to route certain Audio Sources to an Audio Mixer Group and get the spectrum data there, but I haven't tried this myself.

    If you want to make audio reactive things using Audio Helm, there are note events in the sequencer you can hook into and respond to. This doesn't cover prerecorded music/audio though.
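    A sketch of hooking those note events for audio-reactive visuals. The member name `OnNoteOn` and the `Note` payload are assumptions here; check AudioHelm's scripting reference for the actual event member (it may be a UnityEvent rather than a C# delegate):

    ```csharp
    using UnityEngine;
    using AudioHelm;

    // Sketch: flash a renderer whenever the sequencer fires a note.
    public class NoteFlash : MonoBehaviour
    {
        public Sequencer sequencer;
        Renderer rend;

        void Awake() { rend = GetComponent<Renderer>(); }
        void OnEnable() { sequencer.OnNoteOn += HandleNoteOn; }   // assumed event name
        void OnDisable() { sequencer.OnNoteOn -= HandleNoteOn; }

        void HandleNoteOn(Note note)
        {
            // Velocity drives brightness; pitch class drives hue.
            rend.material.color = Color.HSVToRGB((note.note % 12) / 12f, 1f, note.velocity);
        }
    }
    ```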
     
  5. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    Thanks for the insight! I did notice the Note Off did not seem to be working right, though I think for my use I will probably not use that option (will have to experiment), so it probably doesn't matter.

    I think I might not be understanding something, though. Like I said, I am testing a C1 note, which is 9.752 seconds long according to the piano_upright_c1 clip. I put it as the first and only note in a length-16 (Sixteenth Division) sequencer. From what I can gather, this sequence from beginning to end takes about 2 seconds, so a new note is sounded every 2 seconds. Given that, this is what I would assume is happening:

    Format (Note:StartTime-EndTime)
    1:0-9.752
    2:2-11.752
    3:4-13.752
    4:6-15.752
    5:8-17.752
    6:10-19.752 (Note 1 done)
    7:12-21.752 (Note 2 done)

    So when note 6 plays, note 1 should be done playing, and then when note 7 plays, note 2 should be done, and so on. This means only 5 notes are playing at once, so a value of 5 for "Num Voices" should be adequate to avoid a note from being cut off.

    Or is the time a single note sounds much longer than 9.752? Thanks again!
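    The arithmetic in the post above can be written as a quick check (values taken from the post; `Mathf.CeilToInt` is standard Unity):

    ```csharp
    // Voices overlapping when one 9.752 s sample is retriggered every 2 s:
    float clipLength = 9.752f;   // length of the piano_upright_c1 clip
    float stepInterval = 2.0f;   // one full pass of the 16-step sequence

    // Notes ringing at any instant = ceil(clipLength / stepInterval) = 5.
    int voicesNeeded = Mathf.CeilToInt(clipLength / stepInterval);
    ```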
     
  6. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    Posting again for some help (If you could look at the previous question that would be great as well). It doesn't appear that I am able to use AudioSource.GetSpectrumData or GetData when using Helm Controller. Is that expected behaviour or a bug? Thanks.
     
  7. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    I'll answer my own question. When using Helm, the sound is effectively generated by an Audio Mixer Group effect, so the data from the Audio Source is as if it were muted, i.e., there is no data from it to process.

    This obviously isn't your problem but I wonder if you know of a way to work around this? A way to read data from the mixer groups, for example. Thanks.
     
  8. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    I just want to clarify something related to what I wrote above, as the answer has far reaching repercussions. As I stated, the audio appears to be generated by the Helm Audio Mixer Group Effect (a native audio plugin, right?). The consequence of this is that the Audio Source outputs no data. In addition to making data analyzing impossible, it also means you cannot use Audio Filters (such as Reverb Zones) with Helm.

    If true, I would say this is a fairly limiting attribute of Helm. Can you confirm this, or, if I am wrong, explain how?

    I see there is a script called Helm Audio Receive which utilizes AudioHelm.Native.HelmGetBufferData. Is this a way of retrieving the actual sound generated by Helm? I tried adding it to my game object that has my Audio Source (which is outputted to my mixer), and it seemed to screw things up. I'm not sure what the correct usage is.
     
  9. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Yes, the synthesizer runs inside that AudioMixerGroup as a native audio plugin so you can't access the Spectrum Data in the Audio Source.

    Audio Reverb Zone works by pre-modifying a recorded audio clip and is not a real-time effect, so it will not work with a real-time instrument like Helm.
    If you'd like reverb as an effect on Helm, you can add a Reverb effect after Helm in the same Audio Mixer Group. You can then modify its parameters (by right-clicking on the controls and exposing them) based on distance from a zone. This will give you a lot more control than the pre-edited file that the Reverb Zone gives you. It may take some tweaking to sound good, but if you start with just the Dry Level it'll give you a good first pass.
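    A sketch of the exposed-parameter idea. `AudioMixer.SetFloat` is standard Unity API; the exposed name "ReverbDry" is hypothetical (expose the reverb's Dry Level in the mixer and use whatever name you give it), and for the built-in SFX Reverb the Dry Level value is in millibels (-10000..0):

    ```csharp
    using UnityEngine;
    using UnityEngine.Audio;

    // Sketch: fade an exposed reverb parameter by distance to a zone point.
    public class ReverbByDistance : MonoBehaviour
    {
        public AudioMixer mixer;
        public Transform listener;
        public Transform zoneCenter;
        public float zoneRadius = 10f;

        void Update()
        {
            float t = Mathf.Clamp01(
                Vector3.Distance(listener.position, zoneCenter.position) / zoneRadius);
            // Attenuated dry signal at the zone center, full dry outside it.
            mixer.SetFloat("ReverbDry", Mathf.Lerp(-1500f, 0f, t));
        }
    }
    ```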

    Helm Audio Receive works by extracting the audio from an Audio Mixer Group back into a *different* Audio Source. It's useful for using third party spatialization or possibly in your case using GetSpectrumData (though I haven't tried this use case).
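    The untested GetSpectrumData idea could look like this. `AudioSource.GetSpectrumData` is standard Unity API; the assumption (per the post above) is that the Audio Source with Helm Audio Receive attached actually carries the extracted audio:

    ```csharp
    using UnityEngine;

    // Sketch: read spectrum data from the AudioSource that Helm Audio Receive feeds.
    public class HelmSpectrum : MonoBehaviour
    {
        public AudioSource receiveSource;  // the source with HelmAudioReceive attached
        float[] spectrum = new float[256]; // must be a power of two, 64..8192

        void Update()
        {
            receiveSource.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
            // spectrum[i] ~ magnitude of bin i; bin width = (sampleRate / 2) / spectrum.Length
        }
    }
    ```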

    For your original question about the note-off timing: you *might* still get clicks in that scenario, because the Sampler looks ahead a little and schedules the next note before it should play. When it schedules the note it silences the voice it will use, so you may need one more voice than that.
     
  10. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    All right, so it looks like you can route the audio from the mixer back to an audio source using the Helm Audio Receive script (attached to a game object with an Audio Source component that doesn't have an Audio Clip; it's got to be the first component if you have multiple Audio Filters). However, it doesn't appear that any of the other Effects are included in the routed data. This may or may not be a good thing for your use case. I tried moving the Helm effect to the end of the chain and it doesn't make a difference.

    However, you can download some sample Native Audio Plugins that add various Effects to your project (from here), one of which is an Effect called Demo Routing. Add this to your Mixer Group and set the Target to whatever channel you are using. Then add a script called Speaker Routing to your game object in the same way you add the Helm Audio Receive script. Make sure the channels all match up. This method does include all the effects on the Mixer Group.

    Note that with either method you are effectively duplicating the sound generated using Helm, so you have to mute it by setting up a Mixer Group with its attenuation all the way down.

    Clearly you can see I'm a novice at this sound thing, so please correct any inaccuracies with what I wrote.
     
  11. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    Thanks for the information (especially the answer to my previous question).

    The spectral analysis is the main thing I am worried about, though I mentioned Reverb Zones since some people may want to use Audio Filters. Using the Effects on the Audio Mixer Groups is probably a better method though.

    One thing I am testing out now is adding a parent to the Audio Mixer Group with the Helm effect and muting it (setting attenuation to -80), routing the audio of this parent's child to a separate Audio Source via the DemoRouting effect, and then outputting that sound to a separate Audio Mixer Group that is a direct child of the Master group. Besides the obvious extra CPU processing this involves, do you foresee any issues with this route? It should allow Audio Filters on game objects to be used (though I haven't tested).
     
  12. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Right, attenuating the volume in the mixer is the best way at the moment to get what you want. It would be best if Unity allowed a native audio plugin to run in an AudioSource, but I don't think that's going to happen anytime soon.

    I'm not sure what Audio Filters you're talking about, but as I said, the Reverb Zone one is actually pre-processed so it doesn't work on live audio. I'd recommend avoiding C# audio filters if that's what you're talking about. Even with extremely simple audio processing I've seen bad audio glitching. It might have gotten better since I last checked, though.
     
  13. gilley033

    Joined:
    Jul 10, 2012
    Posts:
    895
    By Audio Filters I mean these. Basically anything with an OnAudioFilterRead method. It is good to know about the filters being glitchy, and I'll take your word regarding the Audio Reverb Zone not working (not a big deal). Thanks!
     
  14. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Yeah I'd use the ones inside an AudioMixerGroup before the ones that run on the AudioSource.
     
  15. Pointcloud

    Joined:
    Nov 24, 2014
    Posts:
    33
    Thanks for this plugin, it's great! Quick question: I want to drive the parameters of the synth at runtime without using collisions or a UI. What would be the best way to go about this? Is there any way to have a steady oscillation without having to hit a note? Like just having an oscillator running at 20 Hz that can be manipulated without a keystroke? Or to wait for a note to end before hitting it again in Update, instead of using note length? Here is a script I currently have going; I would like to dynamically update the note being played based on the length of a synth sound, or to directly control an oscillator. Also, any recommendations on how to map values from other sources, such as user position or proximity to an audio object?

    public class ObjectAudio : MonoBehaviour {

        public GameObject audioObject;
        public AudioHelm.HelmController helmController;
        public int note = 60;
        public float noteLength = 1.0f;
        public float hitStrength = 1.0f;
        public float subVolume = 1.0f;

        void Update () {
            if (!helmController.IsNoteOn(note))
                helmController.NoteOn(note, hitStrength, noteLength);
            helmController.SetParameterPercent(AudioHelm.Param.kSubVolume, subVolume);
        }
    }
     
    Last edited: Jan 1, 2019
  16. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    If you're comfortable using Unity's Animations that would probably be the easiest way.
    If you add a parameter to the HelmController in the inspector you can animate those added sliders.

    If you want to update a single parameter based on user position, you can just pass in a normalized distance into where you're passing subVolume. Is that what you're looking for?

    You might also try programming this change into the patch itself using the Helm standalone engine (tytel.org/helm). You can have an envelope slowly bring up the volume of the sub, or have an LFO pulse the volume.
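    The "normalized distance" suggestion above can be sketched like this, using the same `SetParameterPercent` call as the poster's script; `maxDistance` is an assumed tuning value:

    ```csharp
    using UnityEngine;
    using AudioHelm;

    // Sketch: drive a synth parameter from the player's proximity to this object.
    public class ProximityParam : MonoBehaviour
    {
        public HelmController helmController;
        public Transform player;
        public float maxDistance = 20f;  // assumed range for normalization

        void Update()
        {
            // 1 when the player is at the object, 0 at or beyond maxDistance.
            float closeness = 1f - Mathf.Clamp01(
                Vector3.Distance(player.position, transform.position) / maxDistance);
            helmController.SetParameterPercent(AudioHelm.Param.kSubVolume, closeness);
        }
    }
    ```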
     
  17. tencnivel

    Joined:
    Sep 26, 2017
    Posts:
    14
    Hey, I have a multi-track MIDI file (with drums) that I need to play in my game.
    I have noticed that the plugin cannot take a MIDI file and just play it (with the association of tracks, channels, and instruments); it is more 'low level', which is fine if I find a way to achieve the same thing.

    I have imported the tracks into different sequencers using the 'Load MIDI File' button (NOTE: the tracks must be on channel 1; if not, they don't appear).

    I now want to output the sequencers to something that would sound like those basic General MIDI patches. The patches that come with Audio Helm don't sound like that, and I don't want to create my own samples.

    In a nutshell I want to play the sequencer to some basic general midi instruments/patches (including drums)

    What would you recommend?
     
  18. ina

    Joined:
    Nov 15, 2010
    Posts:
    887
    Hi! I'm trying to understand more about using this for generative music.

    - What's the music theory or method behind having the next generated input sound "good"?

    - Is there a way to do something like directional music that sounds good? i.e. continuous notes based on the angle you are looking at?
     
  19. ina

    Joined:
    Nov 15, 2010
    Posts:
    887
  20. JakartaIlI

    Joined:
    Feb 6, 2015
    Posts:
    18
    I have a question.
    Does it support a MIDI keyboard?
    Different channels on the keyboard?
     
  21. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    If you want a *specific* sound you'll have to generate your own samples.
    There are four different drum kits included, but if those don't meet your needs you can replace whatever samples you want in the drum kit sampler.

    I'll probably implement multi track MIDI import in the future but I don't have a timeline on that.
     
  22. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Tough question! I don't think there is a generic answer because it's so subjective.
    I think the easiest place to start when making generative music is random notes on the pentatonic scale (or just the black keys). If you just randomly mash the black keys on a keyboard, it kind of always sounds nice.
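    The random-pentatonic idea could be sketched like this, using the `HelmController.NoteOn(note, velocity, length)` call seen elsewhere in this thread; the scale offsets and timing values are illustrative:

    ```csharp
    using UnityEngine;
    using AudioHelm;

    // Sketch: random notes on a minor pentatonic scale every half second.
    public class PentatonicNoodler : MonoBehaviour
    {
        public HelmController helmController;
        // Minor-pentatonic offsets; random picks from these rarely clash.
        static readonly int[] scale = { 0, 3, 5, 7, 10 };

        void Start() { InvokeRepeating(nameof(PlayRandomNote), 0f, 0.5f); }

        void PlayRandomNote()
        {
            int note = 60 + scale[Random.Range(0, scale.Length)]
                          + 12 * Random.Range(0, 2);  // one of two octaves
            helmController.NoteOn(note, 0.8f, 0.4f);  // velocity 0.8, 0.4 s long
        }
    }
    ```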

    I don't know what you mean about the directional music.
     
  23. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    The synth uses Unity's Native Audio SDK, which doesn't support WebGL builds. There's a list of supported platforms on the store page:
    - Windows 7 and higher
    - Universal Windows Platform ARM/x86/x64
    - MacOS 10.7 and higher
    - Linux, e.g. Ubuntu Trusty and higher
    - iOS 8 and higher
    - Android 5.0 (Lollipop) and higher
     
  24. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    No, there is no MIDI input. But there is an internal sequencer where you can sequence notes and import single-track MIDI files, and there's code to trigger note on/offs based on events.
     
  25. ina

    Joined:
    Nov 15, 2010
    Posts:
    887
    Can you explain how to safely turn on a tone and then turn it off by script?
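    One way to answer this question in code: trigger `NoteOn` and release with `NoteOff` from a coroutine so the note is always paired with a release. `NoteOn`/`NoteOff` are the `HelmController` calls used elsewhere in this thread; the note number and duration are examples:

    ```csharp
    using System.Collections;
    using UnityEngine;
    using AudioHelm;

    // Sketch: turn a tone on, hold it, and reliably turn it off again.
    public class ToneToggle : MonoBehaviour
    {
        public HelmController helmController;

        public IEnumerator PlayTone(int note, float seconds)
        {
            helmController.NoteOn(note, 1.0f);       // start the tone
            yield return new WaitForSeconds(seconds);
            helmController.NoteOff(note);            // always pair NoteOn with NoteOff
        }

        void Start() { StartCoroutine(PlayTone(60, 2f)); }  // middle C for 2 s
    }
    ```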
     
  26. yosun

    Joined:
    May 18, 2017
    Posts:
    4
    Also curious how to produce a warm, string-like synth sound?
     
  27. zackrump

    Joined:
    Jan 19, 2014
    Posts:
    9
    Hi, quick question about the outputs. I am building for iOS and Android. I need to spatialize each note that I generate independently; each note needs its own AudioSource. I guess the brute-force approach would be to create a prefab with an AudioSource and synth or sampler, create a pool of them, and then choose one from the pool when a new note is needed. If this approach would work, what would realistic limits be on pool size in terms of memory and computation? Maybe there's a different approach to consider?
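    The pooling idea described above can be sketched as a simple round-robin pool. The realistic size limit depends on the device and has to be profiled, since each native synth instance has a fixed DSP cost even when silent; `poolSize` here is an assumed starting point:

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    // Sketch: pre-instantiate N synth prefabs, each with its own AudioSource,
    // and hand them out round-robin, one per note.
    public class NotePool : MonoBehaviour
    {
        public GameObject synthPrefab;  // prefab with AudioSource + synth/sampler
        public int poolSize = 8;        // start small on mobile and profile upward
        readonly List<GameObject> pool = new List<GameObject>();
        int next;

        void Awake()
        {
            for (int i = 0; i < poolSize; i++)
                pool.Add(Instantiate(synthPrefab, transform));
        }

        public GameObject GetVoice(Vector3 position)
        {
            GameObject voice = pool[next];
            next = (next + 1) % pool.Count;
            voice.transform.position = position;  // spatialized per note
            return voice;
        }
    }
    ```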
     
  28. mahmoudsaberamin

    Joined:
    May 25, 2017
    Posts:
    9
    Hi, I don't know much about audio, so please answer me in a simple way.
    My question: I have a MIDI file downloaded from the internet, but it is playing the wrong notes because I had to select an audio group with the Helm effect, which has different notes.
    How can I use the original MIDI file's notes?
     
  29. mahmoudsaberamin

    Joined:
    May 25, 2017
    Posts:
    9
  30. mahmoudsaberamin

    Joined:
    May 25, 2017
    Posts:
    9
    Alright, here is what I did:
    I commented out this line in the audio sequencer:
    Native.EnableSequencer(reference, true);
    I loaded the MIDI file into the Helm sequencer in the inspector.
    In the Note On event I spawn game objects.
    The game objects move at the same speed.
    When they collide I call a method which has the following implementation:

    Code (CSharp):
    public void ExecuteNoteOn(Note note)
    {
        _controllerHelm.NoteOn(note.note, note.velocity, (note.end - note.start) * 16 / AudioHelmClock.GetGlobalBpm());
    }
    The output audio is distorted.
    I chose Keys/Piano4.
    I am trying to make a game where the user hits piano objects and sound is generated following the music notes.
     
  31. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    MIDI doesn't contain any sound data; it's only the notes that should play on an instrument.
    So if you load a MIDI file into Audio Helm that was originally played by a different instrument, it won't sound like the original.
    If you follow the native synth tutorial you can select from a bunch of patches to get a sound you want:


    You should be able to load the MIDI file into the sequencer if it is a *single track*. If it's not a single track you'll have to edit it in a program like Reaper, Ableton Live, etc.
     
  32. pleasantPretzel

    Joined:
    Jun 12, 2013
    Posts:
    29
    EDIT: Deleting my message. I had said previously that I discovered crackling/pops in my Unity APKs on Android 9 Pie with projects that include Audio Helm. Turns out other projects without Helm are crackling too. I even noticed that games like the Unity-made _Prism and Crossy Road now have the same intermittent popping sounds on Android 9 (on a Samsung Note8), when they previously had no detectable audio issues on earlier Android versions. So... the crackling is certainly beyond Audio Helm; sorry for jumping the gun on that one!

    I will find a more suitable place to continue the conversation! Carry on :)
     
    Last edited: Apr 17, 2019
  33. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    No problem, thanks for letting me know!
     
  34. Lelon

    Joined:
    May 24, 2015
    Posts:
    76
    Amazing plug-in! I have one question, though: I'm trying to get the piano to sound like an actual real piano. Is that possible? Thank you.
     
  35. mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    There is a piano sampler prefab in the prefabs directory.

    But this is just a basic multi-sampler with samples spread out over octaves. Making a *very* accurate-sounding piano would require a lot more samples (including velocity-adjusted versions), and I don't have any plans to do that.
     
  36. ina

    Joined:
    Nov 15, 2010
    Posts:
    887
    bummer was hoping for a piano synth :(
     
  37. hugodigio

    Joined:
    Jun 24, 2019
    Posts:
    3
    Hello,

    I work for a client who is not familiar with Unity, making an application with a virtual MIDI keyboard (on a mobile touch screen). This client wants to add music to his application and change the sound of the keyboard to match the music and reproduce the original instrument.

    Is it possible to synthesize a full keyboard from only a few identified samples with this plugin at runtime?

    Best regards,
     