
[RELEASED] G-Audio : 2D Audio Framework

Discussion in 'Assets and Asset Store' started by gregzo, Jan 20, 2014.

  1. CinnoMan

    CinnoMan

    Joined:
    Jan 27, 2014
    Posts:
    16
    Hi!
I have a question regarding the pulse modules driving non-audio events: do you see a way to get timing precision that matches PlayScheduled()? As far as I know, coroutines don't update more frequently than Update(), and as expected I'm measuring drift when using them. It's a variable drift, not a steady latency, so increasing the latency doesn't solve the problem.

My current pulse client (just for testing purposes):

Code (csharp):

public class PulseClientTest : AGATImpulseClient, IGATImpulseClient
{
    double pulseDspTime;
    public double myLatency = 0.01d;

    override public void OnImpulse( IGATPulseInfo pulseInfo )
    {
        Debug.Log( "Pulse happened: " + AudioSettings.dspTime );
        pulseDspTime = pulseInfo.PulseDspTime;
        StartCoroutine( MyCoroutine() );
    }

    IEnumerator MyCoroutine()
    {
        while( AudioSettings.dspTime < pulseDspTime - myLatency )
        {
            yield return null; // waiting...
        }
        Debug.Log( "pulseDspTime= " + pulseDspTime + " dspTime= " + AudioSettings.dspTime +
                   " drift= " + ( AudioSettings.dspTime - pulseDspTime ) );
    }
}
    The output of this shows a variable drift of up to 0.027 (seconds?!), even in a very simple scene with not much going on. I'm a little worried about what will happen once I have actual gameplay and graphics. Is there a way to reduce drift further by somehow scheduling non-audio events more precisely, or is this as precise as it can get for non-audio?

    Thanks for your thoughts, cheers! :)
     
  2. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi CinnoMan,

    I'm afraid you cannot time non-audio events more precisely than your frame rate allows.

    What you can do is fire things in advance, which you'll need to do anyway for any kind of "pulsing" of objects.

    Here's a simpler and more precise way of doing things:

Code (csharp):

override public void OnImpulse( IGATPulseInfo pulseInfo )
{
    double deltaDspTime = pulseInfo.PulseDspTime - AudioSettings.dspTime;
    float eventTimeOnMainThread = Time.time + ( float )deltaDspTime;
    StartCoroutine( MyCoroutine( eventTimeOnMainThread ) );
}

IEnumerator MyCoroutine( float eventTime )
{
    while( Time.time < eventTime - myLatency )
    {
        yield return null; // or animate towards eventTime
    }
    // There is your synched method call.
}
    Bear in mind that you cannot wait for a precise amount of time in a Coroutine. You can yield - i.e. wait for the next frame. If the next frame's too late, too bad!
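Expanding on the "or animate towards eventTime" comment above: instead of idling, the wait loop can drive an animation that peaks on the beat. A hedged Unity sketch, not actual G-Audio code - it assumes a MonoBehaviour context, and `AnimateToBeat` plus the scale values are made up for illustration:

```csharp
// Sketch only: scales the object up so the animation lands on the beat.
IEnumerator AnimateToBeat( float eventTime, float animDuration )
{
    Vector3 startScale = Vector3.one;
    Vector3 beatScale  = Vector3.one * 1.5f;

    while( Time.time < eventTime )
    {
        // t goes from 0 ( animDuration before the beat ) to 1 ( on the beat ).
        float t = Mathf.Clamp01( 1f - ( eventTime - Time.time ) / animDuration );
        transform.localScale = Vector3.Lerp( startScale, beatScale, t );
        yield return null;
    }
    transform.localScale = beatScale; // land on the beat, give or take a frame
}
```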

    I hope it helps,

    Gregzo
     
  3. CinnoMan

    CinnoMan

    Joined:
    Jan 27, 2014
    Posts:
    16
    hi Gregzo

    Thanks for your reply. Ok, that's what I figured. I was hoping maybe there'd be a way of getting around the unsteady update rate using OnAudioFilterRead or something else that's somehow tied to the audio update rate rather than the regular frame rate.

    I guess I will have to experiment and see whether the variable timing of non-audio events is noticeable when combined with precise audio beats, or whether the player's brain glosses over it when perceiving both. If it's too harsh, I might consider trading the precise audio for synced audio and video; it might depend on the audio material I use, too.
     
  4. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi CinnoMan,

    OnAudioFilterRead runs on the audio thread. As you cannot call any UnityEngine methods from a thread other than the main thread, basing the pulse system on it seemed pretty useless: you'd have to spawn a third thread to avoid burdening the audio thread, which could do some processing and queue instructions to be performed on the main thread the next time it updates. Not simple, and not many reasons to do it that way...

    Also, when you think about it, a 1/30th second delay between visuals and audio is not much: it's roughly the discrepancy you get when you stand 10 meters away from the stage at a concert ( sound travels about 340 m/s, so it covers 10 m in roughly 1/30th of a second ).

    Finally, don't trade audio precision for sync precision! The ear is quite sensitive to irregularity: calling Play() from the main thread ( as opposed to PlayScheduled ) will result in choppy pulses. G-Audio handles thread-safe queuing of samples so that they are played at the exact requested time.

    Calling Play() rather than PlayScheduled simply says: play as soon as possible, which is at the next audio thread update. Updates occur every bufferSize / sampleRate seconds - roughly 23 ms at 44.1 kHz with the default buffer size ( 1024 samples per channel ). It may decrease latency slightly, but if you want sample accurate timing, there's simply no other solution than PlayScheduled, which the pulse system handles for you.
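The audio thread's update interval mentioned above is straightforward to compute (plain C#, no Unity dependency; `AudioThreadMath` and `UpdateInterval` are illustrative names, not a real API): one buffer of bufferSize sample frames is consumed per update.

```csharp
using System;

public class AudioThreadMath
{
    // Seconds between audio thread updates: the thread wakes up
    // once per rendered buffer of bufferSize sample frames.
    public static double UpdateInterval( int bufferSize, int sampleRate )
    {
        return ( double )bufferSize / sampleRate;
    }
}
```

With the default 1024-sample buffer at 44.1 kHz this gives 1024 / 44100 ≈ 0.0232 s, i.e. roughly 23 ms of worst-case scheduling slack for Play().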

    All in all, don't try to fix what's not broken! You'll lose days of perfectly usable time.

    Cheers,

    Gregzo
     
  5. CinnoMan

    CinnoMan

    Joined:
    Jan 27, 2014
    Posts:
    16
    Hi Gregzo

    Thanks for the tips. You're right, I'm probably worrying too far ahead here. Going to just implement it this way and take care of other stuff for now. Rhythm games made with Unity such as Beat Sneak Bandit seem to work well enough.

    Just for the sake of speculative theory-craft:
    My whole concern is that it's a variable desync, so to speak, which might be more noticeable than a constant latency (like in your concert example). Also, since I'm making an interactive experience, it's more like playing an instrument with high latency, which could be far more noticeable (playing a MIDI keyboard with 33 ms latency? won't feel good!). But that's just premature and speculative worrying... :D

    Cheers,

    CinnoMan
     
  6. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all G-Audiophiles,

    Little heads up:

    v1.1 is shaping up to be a major release, with a much deeper editor integration:
    -Custom inspectors
    -Configure the player, its tracks and effects in a single inspector window, and monitor track levels
    -Microphone classes
    -I/O classes - stream or write wav files, route any audio stream to any receiver
    -Preview sounds and effects in the editor
    -Blazingly fast mixing on iOS thanks to Accelerate

    It will also probably change category in the asset store, from scripting to editor extensions.
    And will get a price bump.

    Many thanks for your patience! We will post a little poll soon to get feedback regarding your wishes. We will prioritize accordingly for v1.2.

    Cheers,

    Gregzo
     
  7. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Sounds great! Thanks Gregzo!
     
  8. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Yeah, it should make G-Audio much more comfortable to work with.

    It's a lot of work, but well worth it - I'm finishing the Player inspector: no more separate GameObjects for tracks, any number may be added just by pressing a button, and filters can also be added in slots ( filtering order is now preserved ).

    4 slots for normal filters and 1 for reverb. Just pick a filter from a drop down menu, click add, and there you go! In play mode, or edit mode.

    Should be good!

    Cheers,

    Gregzo
     
  9. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Can ADSR envelopes control filter cutoff?

    Thanks!
     
  10. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    Not yet!

    Hard at work on the new inspectors, and the Editor integration. Here's a WIP screenshot of the new Player inspector:
    [Attached screenshot: Screen Shot 2014-03-12 at 19.57.33.png]

    As you can see, filters are now super easy to add, and can be added to a whole player too.
    Plus this all works in edit mode…

    There's still a lot to do, I'll get to new filter param control features once the editor integration is complete, after v1.1.

    Many thanks for your patience!

    Gregzo
     
  11. CraigGraff

    CraigGraff

    Joined:
    May 7, 2013
    Posts:
    44
  12. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Craig,

    Count on it - Pulse classes will have popup sample selection. And will work in the editor, so that you can try things out smoothly…
     
  13. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Hi Greg,
    In the Mini Seq example, the "Envelope Settings Offset:" seems to elongate the attack of the sample, but it also seems to add some noise at higher settings. The noise sounds like bit reduction. Any idea what's causing that noise? Are the samples being processed at a low bit depth?

    Thanks!
     
  14. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi,

    No bit reduction and no attack elongation - it's just an offset!

    Meaning: where in the original sample we start cutting.

    In the MiniSeq example, the envelope's length is mapped to BPM. The attack is very short, and chunks are normalized. Because of normalization, the greater the offset into the piano sample, the more white noise you'll get. Try switching off normalization in the envelope module: no noise, but a much fainter sound at high offset values.
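Why normalization raises the noise floor at high offsets can be seen from a generic peak-normalization sketch (not G-Audio's actual code; `NormalizeSketch` is a made-up name): the quieter the slice, the larger the gain needed to bring its peak to full scale, and the recording's background noise is amplified by the same factor.

```csharp
using System;

public class NormalizeSketch
{
    // Peak-normalize a slice of a sample in place: find the loudest value
    // and scale everything so that it reaches full scale ( 1.0 ).
    // Returns the gain applied - large gains also boost background noise.
    public static float Normalize( float[] data )
    {
        float peak = 0f;
        for( int i = 0; i < data.Length; i++ )
        {
            float abs = Math.Abs( data[ i ] );
            if( abs > peak ) peak = abs;
        }
        if( peak == 0f ) return 1f; // silence: nothing to scale

        float gain = 1f / peak;
        for( int i = 0; i < data.Length; i++ )
        {
            data[ i ] *= gain;
        }
        return gain;
    }
}
```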

    Behind the scenes, whenever you change an envelope setting, G-Audio builds and caches a new sample, and manages memory so that all this can be done with zero garbage collection. It's one of the reasons why G-Audio will stay perfectly relevant when Unity 5 comes out: audio in Unity 5 adds tracks and mixers (finally!), but still relies on the AudioClip/AudioSource tandem, which, while great for 3D audio, is extremely cumbersome for sample processing.

    Btw, work on v1.1 is going well, it'll be a huge update.

    Cheers,

    Gregzo
     
  15. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Oh, so it's a sample start offset? That makes sense now: it's just ignoring the first part of the sample, with normalization to bring up the volume of the remaining part. Labeling it under "Envelope" was a little confusing to me, because I think of the ADSR as the envelope. Thanks for the explanation!

    Any idea when to expect 1.1? :)

    Thanks!
     
  16. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    Sorry for the confusion! G-Audio has two different kinds of envelopes: ADSR, which works in real time, and GATEnvelope, which is updated on the main thread. GATEnvelope allows you to pre-process samples, wavetable-synthesis style, and works in tandem with GATActiveSampleBank to cache processed results. GATEnvelope settings are really about cutting a slice of a sample, applying fade in and out, normalizing and reversing. Very handy for the granular synthesis features coming soon!

    v1.1: Editor integration with audio is very challenging - especially considering I'm doing my own memory management. It's working now; just polishing these inspectors and the transitions between play / edit / compile so that errors don't pop up too often!

    It'll be a HUGE update: memory fragmentation graph window, samples playable from any inspector, and maybe in 1.1 but probably a bit later, a node based interface to visualize the links between pulses, sample banks, samples and envelopes.

    Before the end of next month, I very much hope!

    Cheers,

    Gregzo
     
  17. Nifflas

    Nifflas

    Joined:
    Jun 13, 2013
    Posts:
    118
    Hey! I just want to show some things I've been using G-Audio for.

    First, a small free game called 7 Nanocycles. Video here, Download here. The neat thing about the game is that it doesn't use audio loops, but all music is composed in my own sequencer that plays its sequences through G-Audio. Here's a Unity Web Player demo of the music layout alone.
     
    Last edited: Mar 24, 2014
  18. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Awesome Nifflas!

    Very glad to see you putting G-Audio to good use!

    Nearly done with v1.1, you'll have as much low level control, plus all the handy inspectors.

    I've been so busy with it all lately I haven't taken the time to showcase your demos yet - code first, com later!

    Cheers,

    Gregzo
     
  19. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190
    hmmm...in my WebPlayer demo, nearly all the sounds are silent. i get nothing like what you're showing in the game video clips. all i get are the pads, and occasionally the scale change works. i'm on Chrome 33 on a Mac 10.8 machine. i tried this out in Safari 6.0, and got a fairly similar response. so this may be a G-Audio bug in Webplayer builds. recently i tried to show Gregzo's test demo to my class using Chrome and Firefox. neither one of those made a sound. at the time i thought that it was just a fluke, but after seeing this behave in a similar way i'm beginning to have some doubts.

    yup - looks like something's broken. i'm not getting any audio on my Mac for Gregzo's demo now, but i've been OK in the past. the laptop with Safari 7 is fine. so maybe it's a glitch in my browsers. i'll do some more troubleshooting.

    at any rate Nifflas, powerful stuff, at least what i could tell of the video demo. very well done!
     
  20. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Works for me, in OSX 10.9.2, Chrome 33.

    Sounds great too!
     
  21. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi metaphysician,

    Sorry to read you're experiencing problems.

    Just tested on 2 machines - iMac running Mountain Lion and MacBook Pro running Mavericks. No issues, tested both Safari and Firefox.

    Did you check your machine's Audio Midi Setup utility, to make sure the sample rate is 44.1 kHz?

    Also, are you only having trouble with web players, or also in the editor / in standalone builds?

    And what OS's are your machines running?

    If there's a problem, I'd like to get to the bottom of it! As much info as possible would help!

    Ah, nearly forgot: did you try cleaning your browser's cache?

    Cheers,

    Gregzo
     
  22. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Another idea is to check your web player version:
    http://unity3d.com/webplayer/version

    And maybe update it:
    https://unity3d.com/webplayer
     
  23. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    The next, editor integrated, inspector friendly, coffee making version of G-Audio is done.

    I still need a few more days to cleanup the code and update the docs, but it's there and functional all right.

    If any G-Audio user wants to try it out and give me early feedback, please send me a PM or an e-mail, I'll send you the beta.

    I hope to submit to the store in a week or so, so official release shouldn't be before mid-April.

    I hope you will appreciate the effort that went into this new version, and how cool it is to do so much in edit mode!

    Cheers,

    Gregzo
     
  24. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Thanks Gregzo! I just PMed you.
     
  25. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Introduction video for beta testers:

     
  26. thiagoeo

    thiagoeo

    Joined:
    Apr 3, 2014
    Posts:
    1
    Hi gregzo

    I'd like to play 3 audio tracks: one on the first channel, another on the second, and another on the remaining four. Is it possible to do this with G-Audio? How?

    Thank you!
     
  27. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    Great news for G-Audiophiles: 48 kHz playback will finally be supported in the upcoming release.

    We will make an announcement shortly regarding G-Audio 1.1.

    In the meantime, thank you for your patience - 1.1 grew into a lot more than an incremental upgrade, and we are thrilled to let it out in the open very soon.

    Cheers,

    Gregzo
     
  28. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Announcements

    G-Audio 1.1 has just been submitted for review to the asset store and should be available shortly.

    1.1 is a huge update, bringing full support for previewing procedural audio in edit mode through a suite of custom inspectors and windows.

    48 kHz audio is now also fully supported.

    G-Audio 1.1 will be priced at $60 - just a few days left at $30!

    These developments have been made possible thanks to a new addition to the G-Audio team:
    Anthony ( username: neuromorph ) is helping out with project management, outreach and design.
    A warm welcome to him!

    When 1.1 releases, we'll announce our brand new website featuring video tutorials, comprehensive documentation, demos and forums.

    We look forward to hearing what you make with G-Audio!

    Gregzo
     
  29. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Woo hoo! I'm excited to get the final version. I love this system so far. :)
     
  30. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    G-Audio 1.1 is now showing up in the Asset Store! The new things in the release notes sound nice indeed. Looking forward to trying it out. :)

    Jacob
     
  31. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    We're very happy to announce that G-Audio 1.1 is out in the wild.

    We look forward to hearing what you make with it, and to taking your feedback into account for future releases!

    Do visit our brand new website, it features video tutorials, documentation and forums.

    G-Audio 1.1 comes with the "Break-Me" demo, which demonstrates the engine's robustness by gradually pushing a procedural music scene from 240 BPM to 150'000 BPM and beyond - have a go and try not to cringe too much when individual samples begin to merge into noise - believe us, it's safe.

    If you were already extending G-Audio core classes, a few API changes might impact you. Please refer to the read me file included in the package for a complete list of changes. None of them should prove game changers - they mostly relate to laying solid ground for the upcoming I/O system.

    As always, we're here to help!

    Cheers,

    Gregzo
     
  32. musashii

    musashii

    Joined:
    Feb 16, 2014
    Posts:
    1
    So where are the microphone classes you were talking about?? Are they in the 1.1 release?
     
  33. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi musashii,

    I'm sorry, I didn't get notified of your post for some reason.

    The microphone classes didn't make it into 1.1, unfortunately. I had to focus on polishing all the new in-editor stuff first. Now that this is done, mic is next. The classes are functional, simply not quite polished enough: I'm aiming for complete integration of the mic input - record it, play it through a track, buffer it in memory, pipe it through native iOS classes for true pitch shifting, etc… I'd like to do this right, thanks for your patience!

    Gregzo
     
  34. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    A mini update just went live today, mainly adding compatibility with Audial Manipulators, a set of filters that work on any AudioSource.

    The publisher now also provides specific versions of most of the filters which integrate into G-Audio's mixer, just like any G-Audio filter.

    He also kindly offers 3 of them for free to G-Audio users ( they're included in the new update ): Simple Delay, Saturator and Distortion ( a fancier one than G-Audio's ). Other G-Audio compatible filters include Reverb, Bit-Crusher, Compressor, and Foldback Distortion.

    In addition, there's a new class for looping samples ( GATLoopingSample ), and methods to clear the player's playing and scheduled samples without any nasty pops - see the read me file for details.

    Next version will be 1.2, with microphone support and the ground laid for the upcoming iOS specific add-on ( which will be very cheap at first for our early adopters ). The iOS add-on will at first support streaming user files and Dirac filtering ( realtime time-stretching and true pitch shift ), both of streamed files and GATData audio. True pitch shift will also be applicable to entire G-Audio tracks, as well as to the microphone input. Should be fun!

    Cheers,

    Gregzo
     
  35. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi again,

    Finalizing the microphone classes these days.

    If you'd like to test the new package early, do send me a pm with your invoice nb and your e-mail, I'll send you a dropbox link.

    Cheers,

    Gregzo
     
  36. web0nz

    web0nz

    Joined:
    Jan 23, 2013
    Posts:
    1
    Hey gregzo,

    Just found G-Audio and am considering using it for my music-driven game, Signal to Noise. I'm wondering about one feature, and if it doesn't exist yet, I'd be glad to work with you on it: routing audio from external applications to an audio source in Unity.

    ~David
     
  37. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi David,

    G-Audio by nature facilitates piping audio in: you can easily pass a pointer to G-Audio's buffers into unmanaged land and fill it in realtime. I'll have an example out soon which routes a native iOS audio file streamer through Dirac and into G-Audio's mixer.

    What platforms / applications do you have in mind? Of course, any help is always welcome. Do tell me more.

    Cheers,

    Gregzo
     
  38. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Could this lead to iOS Audiobus support at some point in the future? I read that Unity not being able to handle background audio was something that was keeping Unity devs from using Audiobus. It seems that some people have been able to get background things working:
    http://forum.unity3d.com/threads/11...n-background-for-audio-generation-(iOS)/page2

    And with G-Audio doing its own mixing, it seems like it wouldn't even need background audio support to make it work; all it would need is a native plugin to pass the data to? :) I'm hoping!

    Thanks!
    Jacob

    P.S. G-Audio rocks!
     
  39. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Jacob,

    Just voted on the background audio feature, somehow I still hadn't.

    When it becomes possible to support AudioBus without killing Unity's audio altogether, G-Audio will do it.

    Finalizing G-Audio 1.2, working on examples: a simple looper with mic input, à la EveryDayLooper is nearly done.

    Cheers,

    Gregzo
     
  40. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Sounds great Gregzo!

    Also, I just found this article, that seems to indicate that background audio is currently possible in Unity:
    http://wiseman-safiq.blogspot.com/2010/11/ios-executing-code-in-background.html

    It says it's possible via an info.plist file. That was written back in 2010 though, and maybe things have changed since then that would prevent it?

    In the thread here:
    http://forum.unity3d.com/threads/11...-app-in-background-for-audio-generation-(iOS)
    the Unity engine developer responding to the thread seemed to talk about modifying the info.plist file as well.
     
  41. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Jacob,

    Was aware of the thread you linked, unfortunately the dev who managed to get things running in the background also specifies that all audio calls have to go through a native implementation, and that it crashes after a few minutes. Basically, Unity doesn't support it, and the one way to do it properly is to kill FMOD and do all audio natively.

    Bottom line: possible, yes, but at the cost of compatibility with Unity's audio and a LOT of work. And all that for a hacky solution that isn't guaranteed to work properly…

    Unity is awesome in many ways, but when you need to do some lower level stuff, it sometimes throws brick walls at you.

    We can still hope, and vote!

    Cheers,

    Gregzo
     
  42. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    1.2 has just been submitted to the asset store and will go live very soon.
    As usual, e-mail or pm if you want this version early!

    1.2 finally brings I/O to G-Audio, including microphone support and outputting wav files.
    The implementation is completely modular, which means lots of flexibility: basically, any audio stream can be routed anywhere.

    Examples:
    -Mic input to track
    -Unity Audio Source to track
    -Track to file
    -Mic input to file
    -Track to cache
    -Mic to cache
    -Player output to file
    etc…

    All this can be achieved by adding I/O components to game objects - see the Looper Scene included in the package, and featured here.

    A summary of the new classes and components is available in the read me, and the doxygen docs are being updated.

    3 more cool facts:

    1) The MicrophoneModule only allocates a tiny clip ( 1 second ) and streams the audio live. Result: no need to decide in advance how long you want to record, and no added latency when you wish to start recording.

    2) Both StreamToCacheModule and StreamToWavModule enable sample accurate recording: want to start writing that wav file on that precise beat, and stop exactly 3'600'987 samples later? No problem.

    3) This modular I/O system enables routing normal Unity AudioSources through G-Audio's mixer: re-mastering a stereo track live is now possible.
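The 1-second-clip trick in fact 1 is, in essence, a ring buffer: the recorder loops over a short fixed buffer while a reader drains whatever is new on each visit. A generic sketch of the read side (hypothetical `RingReader` class, not the actual MicrophoneModule code):

```csharp
using System;

// Minimal ring-buffer reader sketch: the writer loops over a short fixed
// buffer; the reader drains everything written since its last visit.
public class RingReader
{
    public readonly float[] Ring;
    int readPos;

    public RingReader( int capacity ) { Ring = new float[ capacity ]; }

    // Returns all samples written since the last call, handling wrap-around.
    public float[] DrainTo( int writePos )
    {
        int count = ( writePos - readPos + Ring.Length ) % Ring.Length;
        float[] output = new float[ count ];
        for( int i = 0; i < count; i++ )
        {
            output[ i ] = Ring[ ( readPos + i ) % Ring.Length ];
        }
        readPos = writePos;
        return output;
    }
}
```

Because the reader keeps emptying the buffer, recordings of any length only ever need this one small allocation.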

    Cheers,

    Gregzo
     
  43. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Wow. That's awesome! You rock, Gregzo!

    So does cool fact 3 mean that very long (streamed from disc) audio sources can now play back through G-Audio too?

    Thanks!
    Jacob
     
  44. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Jacob,

    Yes indeed, any AudioSource can be routed through G-Audio's mixer.

    Step by step:

    1) Create an AudioSource
    2) Add a SourceToStreamModule to it: this will convert the AudioSource's output into a G-Audio audio stream
    3) Add a StreamSplitterModule: this will split the interleaved stream in individual mono streams
    4) Add one StreamToTrackModule per channel, and choose which track to output to

    Done!

    This might seem like a lot of steps, but there are many advantages to the modular approach: it's more flexible, and you can mix and match as needed.
    If you wanted to route a single AudioSource to a wav file, for example, you wouldn't need to split the interleaved stream:

    AudioSource
    SourceToStreamModule
    StreamToWavModule

    You could also simultaneously write the microphone's output to disk, send it to a track, and cache the filtered output of the mic in memory:

    MicrophoneModule
    StreamToWavModule ( input: stereo mic stream )
    StreamSplitterModule ( input: stereo mic stream, outputs 2 mono streams )
    StreamToTrackModule ( input: split stream 0, route to track 0 for example )

    And on the GATPlayer object:
    StreamToCacheModule( Input: player track 0 )

    This would write the unfiltered output of the mic to disk, and simultaneously send one of the channels to track 0, filter it if track 0 is filtered, and cache the filtered output in memory ( you can control precisely when caching starts, and how long it lasts ).

    Do try the looper scene, it showcases all of the I/O components. It can't write to disk in the web player, obviously, but aside from that it is perfectly functional.
    In it, unfiltered mic input is cached, so that every track in the looper can be filtered in realtime.

    Cheers,

    Gregzo
     
    Last edited: May 23, 2014
  45. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Fabulous! Love that it's so flexible. Its capabilities have exceeded my expectations!

    Thanks,
    Jacob
     
  46. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    G-Audio 1.3 is in the works, and will focus on adding music oriented classes and components. Here's what's brewing:

    - Scala format support ( .scl ): import 4'000+ scales from the largest repository in the world to easily experiment with exotic tunings, or build your own with the GATScale class.

    - New sample bank class: GATInstrumentBank works with MIDI codes. Load it with an incomplete set of samples, and it will interpolate the missing ones upon request. With the above mentioned Scala support, this means that with just a few samples, you can play patterns in any western or non-western scale.

    - Pitch detection for sound banks ( beta ): G-Audio will analyze your samples and assign a MIDI code to each, saving a lot of dull work. Thanks to this, instrument banks can be created with minimal effort. G-Audio's pitch detection algorithm is over 90% accurate for most instruments, and will warn you when it is unsure of its results.

    - ScalePattern classes: previous G-Audio pattern classes worked with samples, without any knowledge of their musical function. 1.3 will bring ScalePattern, which enables modulating a pattern over a scale's degrees or different scales altogether.

    - New Play method, particularly handy for sequencers - schedule a sample's start time, end time, and fade out duration in one line of code:

    Code (csharp):
GATManager.DefaultPlayer.PlayData( myAudioData, trackNb ).SetEnd( 44100, 11025 );
// Plays the first 44100 samples of myAudioData through track trackNb,
// with the last 11025 samples gradually faded out
    Fading is completely framerate-independent ( handled at buffer level ) and non-destructive. No extra allocations occur.
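A buffer-level fade like the one described can be pictured as follows - a generic sketch, not the actual PlayData/SetEnd internals (`FadeSketch` and its parameters are made up for illustration): whenever a rendered buffer overlaps the sample's final fadeLength samples, a linearly decreasing gain is applied per sample, so the ramp is exact regardless of frame rate.

```csharp
using System;

public class FadeSketch
{
    // Apply a linear fade-out to the portion of a buffer that falls within
    // the sample's fade region. samplePos is the absolute position of the
    // buffer's first sample; the fade spans [ end - fadeLength, end ).
    public static void ApplyFade( float[] buffer, int samplePos, int end, int fadeLength )
    {
        int fadeStart = end - fadeLength;
        for( int i = 0; i < buffer.Length; i++ )
        {
            int pos = samplePos + i;
            if( pos >= end )
            {
                buffer[ i ] = 0f; // past the scheduled end: silence
            }
            else if( pos >= fadeStart )
            {
                buffer[ i ] *= ( float )( end - pos ) / fadeLength;
            }
        }
    }
}
```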

    As always, your comments and suggestions are welcome!


    Cheers,

    Gregzo
     
  47. davylew

    davylew

    Joined:
    Mar 14, 2014
    Posts:
    10
    hi gregzo
    I'm a complete newbie at audio programming. Can I use G-Audio to create a human-voice chorus instrument? I'd record a voice sample, then press different pitch buttons and get a continuous sound, with the length of the sound depending on how long the button is held - like the strings timbre option on my electronic piano.
    thx.
     
  48. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi davylew,

    G-Audio has built-in ADSR envelope support, with automatically crossfaded loop sections and zero-crossing detection.

    That might do, depending on the effect you're after. You could also try a granular synthesis approach, also supported by G-Audio: the basic idea is to cut your sound into tiny grains, and continuously play back lots of them. The "Break Me" demo on our website uses this with piano notes to build the continuous drone you can hear.

    Or you could simply use crossfades to continuously spawn new samples without any interruptions.

    All these options are possible, and quite easy to implement. It really depends on the effect you're after, and the samples you're working with.
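The granular idea above can be sketched generically: cut grains from the source, window (fade) each one to avoid clicks, and overlap-add them into the output at a short hop interval so the result sounds continuous. A minimal non-G-Audio illustration (`GrainSketch` is a made-up name):

```csharp
using System;

public class GrainSketch
{
    // Overlap-add windowed grains of the source into an output buffer.
    // grainLength and hop are in samples; hop < grainLength gives overlap.
    public static float[] OverlapAdd( float[] source, int grainLength, int hop, int outputLength )
    {
        float[] output = new float[ outputLength ];
        int srcPos = 0;
        for( int start = 0; start + grainLength <= outputLength; start += hop )
        {
            for( int i = 0; i < grainLength; i++ )
            {
                // Triangular window: fades each grain in and out to avoid clicks.
                float window = 1f - Math.Abs( 2f * i / ( grainLength - 1 ) - 1f );
                output[ start + i ] += source[ ( srcPos + i ) % source.Length ] * window;
            }
            srcPos = ( srcPos + hop ) % source.Length;
        }
        return output;
    }
}
```

Holding srcPos still (or advancing it very slowly) sustains a note indefinitely from a short sample - the drone effect in the "Break Me" demo works along these lines.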

    If you send me a sample, I can send you a WebPlayer demonstrating the first 2 techniques in very little time.

    Cheers,

    Gregzo
     
  49. davylew

    davylew

    Joined:
    Mar 14, 2014
    Posts:
    10
    thx gregzo

    I uploaded two samples, thanks for your help.
     

    Attached Files:

  50. davylew

    davylew

    Joined:
    Mar 14, 2014
    Posts:
    10
    And the choir effect I want:
     

    Attached Files: