
Audio Helm - Native Synthesizer, Sequencer, Sampler [RELEASED]

Discussion in 'Assets and Asset Store' started by mtytel, Oct 26, 2017.

  1. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Audio Helm is released!
    Anyone want a native synth and sequencer in their game?
    Audio Helm is a live audio synthesizer, sequencer and sampler for Unity. With Audio Helm you can create generative music and musical sound effects for your game.

    Asset Store Page
    https://www.assetstore.unity3d.com/#!/content/86984

    Intro Video:


    Links
    - Manual
    - Standalone Synth Editor (Free/PWYW)
    - Video Tutorials

    Synthesizer
    The synthesizer generates dynamic audio live, no samples or recordings required. It runs as a native plugin to ensure low-latency, high-performance, mobile-ready audio. Download the standalone synth now (free or pay what you want) to browse and create synth patches you can import into your game.

    Sequencer
    The sequencer is a tool for creating musical patterns and rhythms by playing synthesizer or sampler notes over time. You can create your own patterns inside Unity's inspector or create them live from code to generate procedural music.

    Sampler
    The sampler takes an audio sample or recording and can play it back at different speeds to create musical pitches. Using different keyzones, you can create a full-spectrum piano sampler. Audio Helm comes with 4 drum machines, each with a separate sample bank.

    OS Support
    - Windows 7 and higher
    - MacOS 10.7 and higher
    - Linux, e.g. Ubuntu Trusty and higher
    - iOS 8 and higher
    - Android 5.0 (Lollipop) and higher

    One of three video tutorials:


    There is an intro price of $40 ($80 normally)
     
    Last edited: Sep 28, 2018
  2. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    Hi, I just purchased this and I am hearing a popping sound just before the notes play on the Piano Sampler. I am using a Sample Sequencer and just testing out playing a single note. The popping only starts after several loops have run. Currently testing C-1, though I'm pretty sure it does it with any note.

    Also, this is unrelated to your kit, but do you have any experience with visualizing audio information via AudioSource.GetSpectrumData? I want to be able to visualize multiple Audio Sources in the scene (but not all of them, otherwise I'd be able to use AudioListener.GetSpectrumData), so I need a way to combine the spectrum data from multiple Audio Sources. Again, this is unrelated to your product, but I thought I'd ask since you appear to be an audio expert.
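    To make that concrete, here is roughly what I'm picturing (just a sketch on my end; I sum each source's GetSpectrumData output into one buffer, and the class/field names are made up):

    Code (CSharp):
    using UnityEngine;

    // Rough sketch: sum the spectrum of a chosen set of Audio Sources into one buffer.
    public class CombinedSpectrum : MonoBehaviour
    {
        public AudioSource[] sources;          // only the sources I care about
        const int BinCount = 512;              // must be a power of two, 64..8192

        float[] single = new float[BinCount];
        public float[] combined = new float[BinCount];

        void Update()
        {
            System.Array.Clear(combined, 0, BinCount);
            foreach (AudioSource source in sources)
            {
                source.GetSpectrumData(single, 0, FFTWindow.BlackmanHarris);
                for (int i = 0; i < BinCount; i++)
                    combined[i] += single[i];   // simple sum; could also take the max per bin
            }
        }
    }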

    Thanks!
     
  3. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    Okay, I see now that the popping sound is coming from the program stopping one of the sounds before it completely finishes because my "Num Voices" value was not high enough. For instance, if it's set to 2, then on the third loop it will pop as it stops the first loop's sound in order to play the third loop's sound.

    This may be a dumb question (I have no knowledge about this audio stuff), but would it be possible to reproduce how an actual instrument works when playing a note that is already being played? Like if the C-1 key on a piano has been played and is kind of just reverberating (correct term, I hope), and then I press it again, the new note will take over the first, since they utilize the same hardware to produce the sound. This seems like it would cut down on the number of "voices" played at once, if possible.
     
  4. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Like you said, Num Voices should fix it. I think there's a bug where the sampler is not listening to Note Off events from the sequencer, so that can leave a lot of trailing voices on (and have multiple of the same note). I'll fix that in the next release.

    As for the spectrum data, you might be able to route certain Audio Sources to an Audio Mixer Group and get the spectrum data there, but I haven't tried this myself.

    If you want to make audio reactive things using Audio Helm, there are note events in the sequencer you can hook into and respond to. This doesn't cover prerecorded music/audio though.
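    For example, something like this minimal handler (a sketch; you'd wire the method into the sequencer's note-on event in the inspector, and the reaction itself is just a placeholder):

    Code (CSharp):
    using UnityEngine;
    using AudioHelm;

    // Sketch: a handler wired into the sequencer's note-on event in the inspector,
    // used to drive visuals from the notes as they play.
    public class NoteReactive : MonoBehaviour
    {
        public Transform target;

        // The sequencer's note event passes the Note that was triggered.
        public void OnNoteOn(Note note)
        {
            // Placeholder reaction: scale an object by the note's velocity.
            target.localScale = Vector3.one * (1.0f + note.velocity);
        }
    }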
     
  5. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    Thanks for the insight! I did notice the Note Off did not seem to be working right, though I think for my use I will probably not use that option (will have to experiment), so it probably doesn't matter.

    I think I might not be understanding something though. Like I said, I am testing a C1 note, which is 9.752 seconds long according to the piano_upright_c1 clip. I put it as the first and only note in a length 16 (Sixteenth Division) sequencer. From what I can gather, this sequence takes about 2 seconds from beginning to end, so a new note is sounded every 2 seconds. Given that, this is what I would assume is happening:

    Format (Note:StartTime-EndTime)
    1:0-9.752
    2:2-11.752
    3:4-13.752
    4:6-15.752
    5:8-17.752
    6:10-19.752(Note 1 done)
    7:12-21.752(Note 2 done)

    So when note 6 plays, note 1 should be done playing, and then when note 7 plays, note 2 should be done, and so on. This means only 5 notes are playing at once, so a value of 5 for "Num Voices" should be adequate to avoid a note being cut off.

    Or is the time a single note sounds much longer than 9.752? Thanks again!
     
  6. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    Posting again for some help (If you could look at the previous question that would be great as well). It doesn't appear that I am able to use AudioSource.GetSpectrumData or GetData when using Helm Controller. Is that expected behaviour or a bug? Thanks.
     
  7. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    I'll answer my own question. When using Helm, the sound is effectively generated by an Audio Mixer Group effect, so the data from the Audio Source is as if it were muted, i.e., there is no data from it to process.

    This obviously isn't your problem but I wonder if you know of a way to work around this? A way to read data from the mixer groups, for example. Thanks.
     
  8. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    I just want to clarify something related to what I wrote above, as the answer has far-reaching repercussions. As I stated, the audio appears to be generated by the Helm Audio Mixer Group Effect (a native audio plugin, right?). The consequence of this is that the Audio Source outputs no data. In addition to making data analysis impossible, it also means you cannot use Audio Filters (such as Reverb Zones) with Helm.

    If true, I would say this is a fairly limiting attribute of Helm. Can you confirm this, or if I am wrong, explain how so?

    I see there is a script called Helm Audio Receive which utilizes AudioHelm.Native.HelmGetBufferData. Is this a way of retrieving the actual sound generated by Helm? I tried adding it to my game object that has my Audio Source (which is outputted to my mixer), and it seemed to screw things up. I'm not sure what the correct usage is.
     
  9. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Yes, the synthesizer runs inside that AudioMixerGroup as a native audio plugin so you can't access the Spectrum Data in the Audio Source.

    Audio Reverb Zone works by pre-modifying a recorded audio clip and is not a real-time effect, so it will not work with a real-time instrument like Helm.
    If you'd like reverb as an effect on Helm, you can add a Reverb effect after Helm in the same Audio Mixer Group. You can then modify its parameters (by right-clicking the controls and exposing them) based on distance from a zone. This gives you a lot more control than the pre-edited file that the Reverb Zone gives you. It may take some tweaking to sound good, but if you just start with the Dry Level it'll give you a good first pass.
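    A rough sketch of the distance-driven part (assuming you've exposed one of the reverb's parameters; the exposed name, values, and ranges here are placeholders):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Audio;

    // Sketch: drive an exposed reverb parameter on the mixer by distance from a zone center.
    public class ReverbByDistance : MonoBehaviour
    {
        public AudioMixer mixer;
        public string exposedParam = "ReverbDry";  // whatever name you exposed it under
        public Transform listener;
        public Transform zoneCenter;
        public float zoneRadius = 20f;
        public float nearValue = 0f;      // parameter value at the zone center
        public float farValue = -10000f;  // parameter value outside the zone (units depend on the effect)

        void Update()
        {
            float distance = Vector3.Distance(listener.position, zoneCenter.position);
            float t = Mathf.Clamp01(distance / zoneRadius);
            mixer.SetFloat(exposedParam, Mathf.Lerp(nearValue, farValue, t));
        }
    }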

    Helm Audio Receive works by extracting the audio from an Audio Mixer Group back into a *different* Audio Source. It's useful for third-party spatialization or, possibly in your case, for GetSpectrumData (though I haven't tried this use case).

    For your original question about the note-off timing, you *might* still get clicks in that scenario because the Sampler looks ahead a little and schedules the next note before it should play. When it schedules the note it silences the voice it will use, so you may need one more voice than that.
     
  10. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    All right, so it looks like you can route the audio from the mixer back to an audio source using the Helm Audio Receive script (attached to a game object with an Audio Source component that doesn't have an Audio Clip - it's got to be the first one if you have multiple Audio Filters); however, it doesn't appear that any of the other Effects are included in the routed data. This may or may not be a good thing for your use case. I tried moving the Helm effect to the end of the chain and it doesn't make a difference.

    However, you can download some sample Native Audio Plugins that add various Effects to your project (from here), one of which is an Effect called Demo Routing. Add this to your Mixer Group and set the Target to whatever channel you are using. Then add a script called Speaker Routing to your game object in the same way you add the Helm Audio Receive script. Make sure the channels all match up. This method does include all the effects on the Mixer Group.

    Note that with either method you are effectively duplicating the sound generated using Helm, so you have to mute it by setting up a Mixer Group with its attenuation all the way down.

    Clearly you can see I'm a novice at this sound thing, so please correct any inaccuracies with what I wrote.
     
  11. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    Thanks for the information (especially the answer to my previous question).

    The spectral analysis is the main thing I am worried about, though I mentioned Reverb Zones since some people may want to use Audio Filters. Using the Effects on the Audio Mixer Groups is probably a better method though.

    One thing I am testing out now is adding a parent to the Audio Mixer Group with the Helm effect and muting it (setting attenuation to -80), routing the audio of that parent's child to a separate Audio Source via the DemoRouting effect, and then outputting this sound to a separate Audio Mixer Group that is a direct child of the Master group. Besides the obvious extra CPU processing this involves, do you see any issues with this route? It should allow Audio Filters on game objects to be used (though I haven't tested).
     
  12. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Right, attenuating the volume in the mixer is the best way at the moment to get what you want. It would be best if Unity allowed a native audio plugin to run in an AudioSource, but I don't think that's going to happen anytime soon.

    I'm not sure what Audio Filters you're talking about, but as I said, the Reverb Zone one is actually pre-processed so doesn't work on live audio. I'd recommend avoiding C# audio filters if that's what you're talking about. Even with extremely simple audio processing I've seen bad audio glitching. It might have gotten better since I last checked though.
     
  13. gilley033

    gilley033

    Joined:
    Jul 10, 2012
    Posts:
    1,151
    By Audio Filters I mean these. Basically anything with an OnAudioFilterRead method. It is good to know about the filters being glitchy, and I'll take your word regarding the Audio Reverb Zone not working (not a big deal). Thanks!
     
  14. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Yeah I'd use the ones inside an AudioMixerGroup before the ones that run on the AudioSource.
     
  15. Pointcloud

    Pointcloud

    Joined:
    Nov 24, 2014
    Posts:
    37
    Thanks for this plugin, it's great! Quick question - I want to be able to drive the parameters of the synth at runtime without using collisions or a UI; what would be the best way to go about this? Is there any way to have steady oscillation without having to hit a note? Like, just having an oscillator running at 20 Hz that can be manipulated without a key stroke? Or to wait for a note to end before hitting it again in Update instead of using note length? Here is a script I currently have going; I would like to dynamically update the note being played based on the length of a synth sound, or to directly control an oscillator. Also, any recommendations on how to map values from other sources, such as user position or proximity to an audio object?

    using UnityEngine;

    public class ObjectAudio : MonoBehaviour {

        public GameObject audioObject;
        public AudioHelm.HelmController helmController;
        public int note = 60;            // MIDI note number
        public float noteLength = 1.0f;
        public float hitStrength = 1.0f;
        public float subVolume = 1.0f;

        void Update () {
            // Re-trigger the note whenever it isn't currently playing.
            if (!helmController.IsNoteOn(note))
                helmController.NoteOn(note, hitStrength, noteLength);

            helmController.SetParameterPercent(AudioHelm.Param.kSubVolume, subVolume);
        }
    }
     
    Last edited: Jan 1, 2019
  16. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    If you're comfortable using Unity's Animations that would probably be the easiest way.
    If you add a parameter to the HelmController in the inspector you can animate those added sliders.

    If you want to update a single parameter based on user position, you can just pass in a normalized distance into where you're passing subVolume. Is that what you're looking for?

    You might also try programming this change into the patch itself using the Helm standalone engine (tytel.org/helm). You can have an envelope slowly bring up the volume of the sub or have an LFO pulse the volume.
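    For the position-mapping part, something like this rough sketch (field names and ranges are just examples):

    Code (CSharp):
    using UnityEngine;
    using AudioHelm;

    // Sketch: map the player's proximity to an audio object onto a synth parameter.
    public class ProximityToParam : MonoBehaviour
    {
        public HelmController helmController;
        public Transform player;
        public Transform audioObject;
        public float maxDistance = 10f;   // beyond this the parameter stays at 0

        void Update()
        {
            float distance = Vector3.Distance(player.position, audioObject.position);
            float normalized = 1f - Mathf.Clamp01(distance / maxDistance);  // 1 when close, 0 when far
            helmController.SetParameterPercent(Param.kSubVolume, normalized);
        }
    }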
     
  17. tencnivel

    tencnivel

    Joined:
    Sep 26, 2017
    Posts:
    39
    Hey, I have a multi-track MIDI file (with drums) that I need to play in my game.
    I have noticed that the plugin cannot take a MIDI file and just play it (with the association of tracks, channels, and instruments); it is more 'low level', which is fine if I can find a way to achieve the same thing.

    I have imported the tracks into different sequencers using the 'Load MIDI File' button (NOTE: the tracks must be on channel 1, otherwise they don't appear).

    I now want to output the sequencers to something that would sound like those basic General MIDI patches. The patches that come with Audio Helm don't sound like that, and I don't want to create my own samples.

    In a nutshell, I want to play the sequencers through some basic General MIDI instruments/patches (including drums).

    What would you recommend?
     
  18. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,058
    Hi! I'm trying to understand more about using this for generative music.

    - What's the music theory or method behind having the next generated input sound "good"?

    - Is there a way to do something like directional music that sounds good? i.e. continuous notes based on the angle you are looking at?
     
  19. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,058
  20. JakartaIlI

    JakartaIlI

    Joined:
    Feb 6, 2015
    Posts:
    28
    I have a question.
    Does it support a MIDI keyboard?
    Different channels on the keyboard?
     
  21. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    If you want a *specific* sound you'll have to generate your own samples.
    There are four different drum kits included but if those don't meet your needs you can just replace whatever samples you want to in the drum kit sampler.

    I'll probably implement multi-track MIDI import in the future, but I don't have a timeline on that.
     
  22. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    Tough question! I don't think there is a generic answer because it's so subjective.
    I think the easiest place to start when making generative music is random notes on the pentatonic scale (or just the black keys). If you just randomly mash the black keys on a keyboard, it kind of always sounds nice.
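    A minimal sketch of that idea (assuming a HelmController reference; NoteOn(note, velocity, length) is the call shown elsewhere in this thread, and the timing values are arbitrary):

    Code (CSharp):
    using System.Collections;
    using UnityEngine;
    using AudioHelm;

    // Sketch: play random notes from the pentatonic scale (the black keys) forever.
    public class PentatonicNoodler : MonoBehaviour
    {
        public HelmController helmController;
        // Black-key pitch classes within one octave: C#, D#, F#, G#, A#
        static readonly int[] pentatonic = { 1, 3, 6, 8, 10 };

        IEnumerator Start()
        {
            while (true)
            {
                int octave = Random.Range(4, 6);   // keep it in a comfortable range
                int note = 12 * octave + pentatonic[Random.Range(0, pentatonic.Length)];
                helmController.NoteOn(note, 0.8f, 0.5f);   // note, velocity, note length
                yield return new WaitForSeconds(0.5f);
            }
        }
    }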

    I don't know what you mean about the directional music.
     
    ina likes this.
  23. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    The synth uses Unity's Native Audio SDK, which doesn't support WebGL builds. There's a list of supported platforms on the store page:
    - Windows 7 and higher
    - Universal Windows Platform ARM/x86/x64
    - MacOS 10.7 and higher
    - Linux, e.g. Ubuntu Trusty and higher
    - iOS 8 and higher
    - Android 5.0 (Lollipop) and higher
     
  24. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    No, there is no MIDI input. But there is an internal sequencer where you can sequence notes and import single-track MIDI files, and there's code to trigger note on/offs based on events.
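    For triggering notes from script, something along these lines (a sketch: NoteOn with a length appears elsewhere in this thread, while the velocity-only NoteOn and the explicit NoteOff are assumptions on my part):

    Code (CSharp):
    using System.Collections;
    using UnityEngine;
    using AudioHelm;

    // Sketch: turn a tone on from script and turn it off again.
    public class ToneTrigger : MonoBehaviour
    {
        public HelmController helmController;
        public int note = 60;   // middle C

        IEnumerator Start()
        {
            // Option 1: give the note a fixed length and let the controller release it.
            helmController.NoteOn(note, 1.0f, 2.0f);
            yield return new WaitForSeconds(3.0f);

            // Option 2: hold the note ourselves, then release it explicitly.
            helmController.NoteOn(note, 1.0f);
            yield return new WaitForSeconds(2.0f);
            helmController.NoteOff(note);
        }
    }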
     
  25. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,058
    Can you explain how to safely turn on a tone and then turn it off by script?
     
  26. yosun

    yosun

    Joined:
    May 18, 2017
    Posts:
    5
    Also curious how to produce a warm, string-like synth sound?
     
  27. zackrump

    zackrump

    Joined:
    Jan 19, 2014
    Posts:
    9
    Hi - quick question about the outputs. I am building for iOS and Android. I need to spatialize each note that I generate independently; each note needs its own AudioSource. I guess the brute force approach would be to create a prefab with an AudioSource and synth or sampler, create a pool of them, and then choose one from the pool when a new note is needed. If this approach would work, what would realistic limits be on pool size in terms of memory and computation? Maybe there's a different approach to consider?
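    Roughly what I have in mind is a pool like this (just a sketch; the prefab would carry its own AudioSource plus a synth or sampler, and all names here are made up):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Sketch of the brute-force idea: a fixed pool of voice prefabs,
    // each with its own AudioSource (plus synth/sampler) for spatialization.
    public class NoteVoicePool : MonoBehaviour
    {
        public GameObject voicePrefab;
        public int poolSize = 8;

        readonly Queue<GameObject> free = new Queue<GameObject>();

        void Awake()
        {
            for (int i = 0; i < poolSize; i++)
            {
                GameObject voice = Instantiate(voicePrefab, transform);
                voice.SetActive(false);
                free.Enqueue(voice);
            }
        }

        // Grab the next free voice and place it where the note should sound.
        public GameObject PlayAt(Vector3 position)
        {
            if (free.Count == 0) return null;   // pool exhausted; caller decides what to do
            GameObject voice = free.Dequeue();
            voice.transform.position = position;
            voice.SetActive(true);
            return voice;
        }

        // Return a voice to the pool once its note has finished.
        public void Release(GameObject voice)
        {
            voice.SetActive(false);
            free.Enqueue(voice);
        }
    }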
     
  28. mahmoudsaberamin

    mahmoudsaberamin

    Joined:
    May 25, 2017
    Posts:
    11
    Hi, I don't know much about audio, so please answer me in a simple way.
    My question is: I have a MIDI file downloaded from the internet, but it is playing the wrong notes because I had to select an audio group with a Helm effect, which plays different notes.
    How can I use the original MIDI file's notes?
     
  29. mahmoudsaberamin

    mahmoudsaberamin

    Joined:
    May 25, 2017
    Posts:
    11
  30. mahmoudsaberamin

    mahmoudsaberamin

    Joined:
    May 25, 2017
    Posts:
    11
    Alright, here is what I did:
    I commented out this line in the audio sequencer:
    Native.EnableSequencer(reference, true);
    I loaded the MIDI file into the Helm sequencer in the inspector.
    In the Note On event I spawn game objects.
    The game objects all move at the same speed.
    When they collide I call a method
    which has the following implementation:

    Code (CSharp):
    public void ExecuteNoteOn(Note note)
    {
        _controllerHelm.NoteOn(note.note, note.velocity, (note.end - note.start) * 16 / AudioHelmClock.GetGlobalBpm());
    }
    The output audio is distorted.
    I chose Keys/Piano4.
    I am trying to make a game where the user hits piano objects and sound gets generated following the music notes.
     
  31. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    MIDI doesn't contain any sound data; it's only the notes that should be played on an instrument.
    So if you load a MIDI file into Audio Helm that was playing a different instrument, it won't sound like the original.
    If you follow the native synth tutorial you can select from a bunch of patches to get a sound you want:


    You should be able to load the MIDI file into the sequencer if it is a *single track*. If it's not a single track you'll have to edit it in a program like Reaper, Ableton Live, etc.
     
  32. pleasantPretzel

    pleasantPretzel

    Joined:
    Jun 12, 2013
    Posts:
    34
    EDIT: Deleting my message. I had said previously that I discovered crackling/pops in my Unity APKs on Android 9 Pie with projects that include Audio Helm. Turns out other projects without Helm are crackling too. I even noticed that Unity-made games like _Prism and Crossy Road now have the same intermittent popping sounds on Android 9 (on a Samsung Note8), when they previously had no detectable audio issues on earlier Android versions. So... the crackling is certainly beyond Audio Helm - sorry for jumping the gun on that one!

    I will find a more suitable place to continue the conversation! Carry on :)
     
    Last edited: Apr 17, 2019
  33. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    No problem, thanks for letting me know!
     
  34. Lelon

    Lelon

    Joined:
    May 24, 2015
    Posts:
    79
    Amazing plug-in! I have one question though: I'm trying to get the piano to sound like an actual real piano, is that possible? Thank you.
     
  35. mtytel

    mtytel

    Joined:
    Feb 28, 2014
    Posts:
    84
    There is a piano sampler prefab in the prefabs directory.

    But this is just a basic multi-sampler with samples spread out across octaves. Making a *very* accurate-sounding piano would require a lot more samples (including velocity-adjusted versions) and I don't have any plans to do that.
     
  36. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,058
    bummer was hoping for a piano synth :(
     
  37. hugodigio

    hugodigio

    Joined:
    Jun 24, 2019
    Posts:
    3
    Hello,

    I work for a client who is not familiar with Unity, to make an application with a virtual MIDI keyboard (on a mobile touch screen). This client wants to add music to his application and change the sound of the keyboard to match the music and reproduce the original instrument.

    Is it possible to synthesize a full keyboard from only a few identified samples with this plugin at runtime?

    Best regards,
     
  38. yendou

    yendou

    Joined:
    Jan 17, 2017
    Posts:
    12
    Hello, is it possible to record loops with the sequencer? I send notes when pressing keys; now I'm thinking about how to correctly input them into the sequencer to record a loop. I would gladly pay for this function.
    Best regards and keep up the great work
     
  39. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    571
    I think the best approach is to use a DAW like Logic to record your MIDI tracks. Helm shows up as a synthesizer in Logic, so I can tweak the synth as well as record my MIDI notes. Save the tweaked synth as your own Helm patch and export the MIDI files. The demo shows how to load patches and notes.
     
  40. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    571
    @mtytel Is there a method to add a random chance of a note playing per MIDI sequencer? I'd like to try the technique Brian Eno describes, where I can assign something like a 14% probability that any given note will be played on a MIDI track. That way there is some randomness, but still an overall structure to the music.
     
  41. marrobin

    marrobin

    Joined:
    Oct 1, 2020
    Posts:
    7
    One way to do this is to pre-load the sequencer with a MIDI file containing all the notes, then iterate over the notes and, for each note, randomly drop its velocity to zero or delete the note altogether.

    Code (CSharp):
    // Keeps each note at full velocity ~86% of the time and silences it otherwise
    // (flip the comparison to < if you want a 14% chance of playing instead).
    for (int i = 0; i < sequencer.allNotes.Length; ++i)
    {
        foreach (Note note in sequencer.allNotes[i].notes)
            note.velocity = Random.Range(0f, 1f) > 0.14f ? 1f : 0f;
    }
     
  42. marrobin

    marrobin

    Joined:
    Oct 1, 2020
    Posts:
    7
    Hi @mtytel and friends!
    I'm trying to programmatically move the playhead position to arbitrary points on the timeline. I've done this by adding this method to the AudioHelmClock class:

    Code (CSharp):
    /// <summary>
    /// Seek to the specified beat.
    /// </summary>
    /// <param name="newGlobalBeatTime">The beat value to seek to.</param>
    public void SeekBeat(double newGlobalBeatTime)
    {
        double time = AudioSettings.dspTime;
        lastSampledTime = time;

        globalBeatTime = newGlobalBeatTime;
        Native.SetBeatTime(globalBeatTime);
    }
    This works correctly, but with an odd side effect. After seeking to a new beat position, the Native sequencer iterates through every note in between the current beat position and the new position and fires a NoteOn, then NoteOff for each note, creating an instantaneous loud burst of sound. Trying to hack around this by calling AllNotesOff isn't viable because it also turns off the notes at the new playhead position.

    How can I seek to a new beat position and have the Native Sequencer/Synth skip over all of the intermediary notes?
     
  43. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    571
    Yeah, I ended up doing something similar. I used the Note Off event; I think I could also have done this per beat, but this gave the effect I was after. After every Note Off in the Helm Sequencer, I set the velocity for the next note randomly:

    Code (CSharp):
    public void Velocity(Note note)
    {
        float rand = Random.Range(0.0f, 1.0f);
        if (rand < percentOddsOfPlaying * .01f)
            note.velocity = 1f;
        else
            note.velocity = 0.01f;
    }
    I can set percentOddsOfPlaying to 14 and there should be 14% odds of playing that particular note.
     
  44. marrobin

    marrobin

    Joined:
    Oct 1, 2020
    Posts:
    7
    @mtytel - can you please help with my question above?
     
  45. lazer

    lazer

    Joined:
    Aug 15, 2012
    Posts:
    20
    Hi @mtytel -

    First off, I'm really excited to be using Helm in my project. I'm using it for a dynamic soundscape in a neurofeedback application. Today though I ran across a crash on patch loading which I can't figure out. It appears to have been brought on by trying to load the "MT Talk Radio" patch. My scene has 4 synth instances, 2 with MT Talk Radio and 2 with MT Fluffy Landscapes, which was working previously with all four instances loading MT Fluffy Landscapes.

    Here's the error.log Unity generates when it crashes.

    Code (CSharp):
    1. Unity Editor by Unity Technologies [version: Unity 2019.4.20f1_6dd1c08eedfa]
    2.  
    3.  
    4. AudioPluginHelm.dll caused an Access Violation (0xc0000005)
    5. in module AudioPluginHelm.dll at 0033:df1234bc.
    6.  
    7.  
    8.  
    9. Error occurred at 2021-03-28_203920.
    10. C:\Program Files\Unity\Hub\Editor\2019.4.20f1\Editor\Unity.exe, run by daniel.
    11.  
    12.  
    13. 62% physical memory in use.
    14. 16112 MB physical memory [6094 MB free].
    15. 1551 MB process peak paging file [1359 MB used].
    16. 1019 MB process peak working set [820 MB used].
    17. System Commit Total/Limit/Peak: 13627MB/39664MB/14038MB
    18. System Physical Total/Available: 16112MB/6094MB
    19. System Process Count: 238
    20. System Thread Count: 2912
    21. System Handle Count: 101768
    22.  
    23. Disk space data for 'C:\Users\daniel\AppData\Local\Temp\Unity\Editor\Crashes\Crash_2021-03-29_033915314\': 788893990912 bytes free of 1003008028672 total.
    24.  
    25. Read from location 0000000000000010 caused an access violation.
    26.  
    27. Context:
    28. RDI: 0x0000000000000000 RSI: 0x000001e14a0b5618 RAX: 0x0000000000000000
    29. RBX: 0x000001e1aca22250 RCX: 0x0000004634d8dd40 RDX: 0x000000000000000f
    30. RIP: 0x00007ff8df1234bc RBP: 0x000001e14694b630 SegCs: 0x0000000000000033
    31. EFlags: 0x0000000000010206 RSP: 0x0000004634d8dd70 SegSs: 0x000000000000002b
    32. R8: 0x000001e21525a1c0 R9: 0x0000000000000000 R10: 0x000001e218df1e50
    33. R11: 0x000001e14694b650 R12: 0x000001e218df1e50 R13: 0x0000000000000004
    34. R14: 0x000001e14694b630 R15: 0x000001e218df1f50
    35.  
    36. Bytes at CS:EIP:
    37. 48 8b 48 10 4c 8b 01 41 ff 50 38 48 8d 55 20 0f
    38.  
    39. Mono DLL loaded successfully at 'C:\Program Files\Unity\Hub\Editor\2019.4.20f1\Editor\Data\MonoBleedingEdge\EmbedRuntime\mono-2.0-bdwgc.dll'.
    40.  
    41.  
    42. Stack Trace of Crashed Thread 8072:
    43. 0x00007FF8DF1234BC (AudioPluginHelm) UnityGetAudioEffectDefinitions
    44. 0x00007FF8DF1393CA (AudioPluginHelm) HelmAddModulation
    45. 0x000001E20C10DC9D (Assembly-CSharp) AudioHelm.Native.HelmAddModulation()
    46. 0x000001E20C10CEEB (Assembly-CSharp) AudioHelm.HelmController.LoadPatch()
    47. 0x000001E20C0EE1CB (Assembly-CSharp) <hack_force_delayed_load_patch>d__12.MoveNext()
    48. 0x000001E1B19208A1 (UnityEngine.CoreModule) UnityEngine.SetupCoroutine.InvokeMoveNext()
    49. 0x000001E1B1920A90 (UnityEngine.CoreModule) <Module>.runtime_invoke_void_object_intptr()
    50. 0x00007FF8DE52DAD0 (mono-2.0-bdwgc) mono_get_runtime_build_info
    51. 0x00007FF8DE4B2932 (mono-2.0-bdwgc) mono_perfcounters_init
    52. 0x00007FF8DE4BB98F (mono-2.0-bdwgc) mono_runtime_invoke
    53. ERROR: SymGetSymFromAddr64, GetLastError: 'Attempt to access invalid address.' (Address: 00007FF7AC84945E)
    54.  
    55. 0x00007FF7AC84945E (Unity) (function-name not available)


    Scratching my head here...hope you can help!

    Update: I can successfully load multiple patches into multiple synths through the Audio Mixer interface; the issue is triggered when I load a patch at runtime, and only into certain synth instances.


    Thanks,
    Daniel
     
    Last edited: Mar 30, 2021
  46. joelognn

    joelognn

    Joined:
    Aug 8, 2018
    Posts:
    9
    Hello all, I wondered if anyone could point me in the right direction here. I cannot find what I am looking for in the documentation and I am new to AudioHelm.

    Is there a simple way to load a patch into the HelmController component? Ideally create an array of HelmPatch objects and load them in that way. I have seen how this is done in the HelmPatchUI component, but is there any easier way than loading the file in from a path? For example:

    Code (CSharp):
    public HelmPatch[] patches;
    controller.LoadPatch(patches[0]);
    Obviously this doesn't work as intended, as you cannot allocate any of the preset patches that way.

    Would appreciate some guidance.
     
    zacharyaghaizu likes this.
  47. MattRix

    MattRix

    Joined:
    Aug 23, 2011
    Posts:
    121
    @joelognn It's not exactly what you want, but you can add a bunch of HelmPatch components to the current gameobject and then get all of the HelmPatches that are attached to the current component. The big problem is that the HelmPatchUI actually does the loading of the data in the editor. The only other simple way I can see to bypass that in the editor would be to rename your .helm files to .txt, then load their data as TextAssets... or you could probably keep them as .helm but treat them like regular data files instead of assets.
     
  48. MattRix

    MattRix

    Joined:
    Aug 23, 2011
    Posts:
    121
    I imagine Matt Tytel is busy with his new synth Vital (which is awesome!) so I'll throw in a handy discovery I made:

    If anyone else is running into an issue where the audio plugin GUI for Helm is insanely slow (taking ~80ms per frame on my computer), I tracked it down to GetFullPatchesPath() in PatchBrowserUI.cs

    There's a bunch of code that is trying to find your current presets folder but it's iterating over all the directories in your project to find it... and it's doing this twice per frame (on repaint and layout). As long as AudioHelm is in the default folder, you can comment all that code out (and if not, just enter your hardcoded location for it here instead!)
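    For the hardcoded route, a sketch of what I mean (the folder path below is just an example; point it at wherever your presets actually live, and adjust to the method's actual signature):

    Code (CSharp):
    // Hypothetical shortcut inside PatchBrowserUI.GetFullPatchesPath():
    // skip the per-frame directory search and return the known location directly.
    string GetFullPatchesPath()
    {
        return Application.dataPath + "/AudioHelm/Presets/";   // adjust to your project layout
    }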

     
  49. zacharyaghaizu

    zacharyaghaizu

    Joined:
    Aug 14, 2020
    Posts:
    65
    Love for this!!! I was really worried and even started clearing files on my computer, thinking it was my system.
     
  50. FMmutingMode

    FMmutingMode

    Joined:
    Feb 4, 2019
    Posts:
    5
    Just started using Audio Helm and it works great in Unity. But when I build an Android version, the Helm audio source 'blips' and then stops playing on level load. The other audio sources in the scene work. Has anyone encountered something like this?
     
    zacharyaghaizu likes this.