[RELEASED] G-Audio : 2D Audio Framework

Discussion in 'Assets and Asset Store' started by gregzo, Jan 20, 2014.

  1. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Ok, I see.

    What you need is a MIDI plugin, not G-Audio. There is one on the Asset Store, in the Editor Extensions section, though I think it requires Unity Pro.

    You could use G-Audio to build a player that would do what you want, but it would take quite a bit of work, and if you're not at all experienced in audio programming, I wouldn't advise it.

    Let me know if I can help further down the road!

    Cheers,

    Gregzo
     
  2. CinnoMan

    CinnoMan

    Joined:
    Jan 27, 2014
    Posts:
    16
    Hi Gregzo,

    I was using an older version of G-Audio (1.0x) to drive my beat-synced game (cf. earlier in this thread), and now I'd like to move to the newest version. However, it seems there have been some major changes to the pulse system - is that correct?

    I had made some very minor modifications to the PulseModule base class, the abstract AGATImpulseClient class and the IGATImpulseClient interface, but it seems those aren't even part of 1.2x anymore. Is there version history documentation, or some other resource, that would facilitate the move to the new pulse system for me? Do you maybe have other pointers you could give me as to how to make the change? That would be awesome!
     
  3. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi CinnoMan,

    Plenty of new stuff to play around with since 1.x!

    API changes are logged in the ReadMe file. Regarding pulse classes, they were simplified in 1.1:
    Code (CSharp):
    -IGATImpulseClient is no more - PulseModule classes now only fire OnPulse. This greatly simplifies the pulse system.
    A new subscribable delegate, onWillPulse, fires even when individual steps are bypassed.

    You may also implement IGATPulseController to receive a callback just before the next pulse is updated.

    Summing up the order of pulse events:
    1) OnPulseControl( PulseInfo previousPulseInfo ). There can be only one pulse controller, which should implement IGATPulseController.
    2) PulseInfo is updated.
    3) onWillPulse fires. The delegate is public, anyone may subscribe. Useful for envelopes, which might need to update ( if the pulse changes ) before the pulse event.
    4) OnPulse fires, on checked steps only. The delegate is not public: implement IGATPulseClient to subscribe.

    As a direct consequence of these changes, AGATImpulseClient is now AGATPulseClient.
    As a general rule, it's always dangerous to change classes from a framework. Much better to add your functionality by subclassing or writing separate classes which wrap or make use of the framework's.
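
    For illustration, a minimal sketch of a pulse client; the member names below (a PulseInfo parameter on onWillPulse, an OnPulse method on IGATPulseClient, a public PulseModule field) are assumptions based on the changelog above, so check the actual declarations in the sources before relying on it:
    Code (CSharp):
    using UnityEngine;

    // Sketch only: the event order comes from the changelog above, but the exact delegate and
    // interface signatures are assumptions - verify against the G-Audio sources.
    public class MyPulseFollower : MonoBehaviour, IGATPulseClient
    {
        public PulseModule pulse; // the pulse to follow, assigned in the inspector ( assumed field type )

        void OnEnable()
        {
            pulse.onWillPulse += OnWillPulse; // public delegate: fires even when steps are bypassed
        }

        void OnDisable()
        {
            pulse.onWillPulse -= OnWillPulse;
        }

        void OnWillPulse( PulseInfo pulseInfo )
        {
            // Update envelopes or other pre-pulse state here.
        }

        public void OnPulse( PulseInfo pulseInfo ) // IGATPulseClient: fires on checked steps only
        {
            // React to the pulse here, e.g. trigger a sample.
        }
    }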

    Do join our forums! They're beginning to show some signs of life, and I'm always quick to reply there.

    Cheers,

    Gregzo
     
  4. CinnoMan

    CinnoMan

    Joined:
    Jan 27, 2014
    Posts:
    16
    Thanks for that, will do.
     
  5. akarsh

    akarsh

    Joined:
    Aug 26, 2014
    Posts:
    11
    Hi,
    We are looking to buy the G-Audio plugin. Would it be able to perform the following functionality?
    I'll try to explain the requirement as clearly as I can.
    > We have pre-existing audio files.
    > We need to input sound from the microphone.
    > The input from the microphone will be based on how long the user speaks.
    > The existing audio files that are already stored will be cut to match the length of the microphone input.
    > Now, the existing audio file that has been cut will be output like the input microphone audio (that is, modulating the existing audio file to sound like the microphone input, i.e. the same waveform).
    Is this requirement possible with your plugin? Is it possible for you to arrange a demo that would demonstrate this kind of functionality?
    I hope I have been clear; if there's still some confusion or misunderstanding, please ask.
     
  6. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi,

    I'll break down what G-Audio does and doesn't cover:

    1) Handling pre-existing audio files: YES, ogg and wav
    2) Handling microphone input: YES
    3) Detecting when mic input is above a certain threshold and starting recording: NO, but quite simple to implement (see the sketch after this list)
    4) Morphing 2 sets of audio data: NO, you'll need a specialised library for that. Audio morphing is a complex DSP effect, not trivial at all!
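
    For point 3, a minimal sketch using only Unity's built-in Microphone API (not G-Audio); the threshold and window size are arbitrary values to tune:
    Code (CSharp):
    using UnityEngine;

    // Minimal sketch of point 3 above, using Unity's built-in Microphone API ( not G-Audio ).
    // The threshold and window length are arbitrary values to tune for your use case.
    public class MicThresholdDetector : MonoBehaviour
    {
        public float threshold = .02f; // RMS level above which we consider the user to be speaking

        const int SampleRate = 44100;

        AudioClip _micClip;
        float[]   _window = new float[ 1024 ];

        void Start()
        {
            // null = default microphone, looping 1 second buffer
            _micClip = Microphone.Start( null, true, 1, SampleRate );
        }

        void Update()
        {
            int micPos = Microphone.GetPosition( null );
            if( micPos < _window.Length )
                return;

            // Grab the latest window of samples and compute its RMS level.
            _micClip.GetData( _window, micPos - _window.Length );

            float sum = 0f;
            for( int i = 0; i < _window.Length; i++ )
            {
                sum += _window[ i ] * _window[ i ];
            }

            float rms = Mathf.Sqrt( sum / _window.Length );

            if( rms > threshold )
            {
                // Start recording here: note AudioSettings.dspTime and begin copying mic data.
                Debug.Log( "Mic input above threshold: " + rms );
            }
        }
    }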

    Cheers,

    Gregzo
     
  7. akarsh

    akarsh

    Joined:
    Aug 26, 2014
    Posts:
    11
    Hi Gregzo,

    Would you please arrange a demo so that we can check whether it can satisfy our need?

    Thanks & Regards,
    Akarsh
     
  8. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Akarsh,

    As I mentioned in my last post, audio morphing is not included in G-Audio and requires a specialized library. I have found this one: http://www.cerlsoundgroup.org/Loris/ , for which you could write a plugin.

    Cheers,

    Gregzo
     
  9. mrleon

    mrleon

    Joined:
    Sep 27, 2014
    Posts:
    18
    Hi, I just got G-Audio yesterday and I'm trying to follow the Getting Started video on the website, but I've run into an immediate issue: when I go to 'Hierarchy > Create > G-Audio > Sample Bank', the Sample Bank Wizard does not appear, so I can't create a Sample Bank. I've tried reinstalling Unity (version 4.5.4f1). All the other G-Audio create items seem to work. I'm using a Mac (OS X 10.9.5); does anybody else have this problem? Is there a workaround?
     
  10. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi,

    Welcome to G-Audio!

    Are you getting any errors?

    I'm currently on holiday (back Tuesday). In the meantime, you can hack around by using a copy of one of the example scenes, which have Sample Banks you can tweak, or try adding a GATActiveSampleBank script manually to a GO.

    Also, if you don't mind the extra account, I do most support on the dedicated forums here: http://www.g-audio-unity.com/forums/

    Apologies for the issue and the timing; looking forward to helping you with G-Audio!

    Gregzo
     
  11. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    G-Audio 1.31 has just been submitted to the asset store and should go live some time next week.

    It's a stability release, focusing on crushing quite a few creepy crawlies:

    - Fixed a padding issue when parsing certain types of ogg files (OggFile.ReadNextChunk)
    - Fixed accessing streaming assets on iOS
    - New speaker mode setting (GATManager inspector): force stereo or align with the platform's
    driver caps. Quad and higher should now work properly, both in the editor and in builds
    - New class: GATFilterParam - facilitates access to filter parameters in code
    - Reworked LFOFilterParam to make use of GATFilterParam
    - Fixed a rare edit mode bug where GATPlayer's inspector would lock in an irreversible fail state

    As always, get in touch by pm or e-mail if you would like early access.

    Cheers,

    Gregzo
     
  12. fizzd

    fizzd

    Joined:
    Jul 30, 2013
    Posts:
    21
    Hi gregzo,

    I'm making a rhythm game (a very very timing-strict one) and so far Unity's DSPTime has been sufficient for calibration.

    Now though, I want to be able to play sounds on the fly, in time with the music. And I have come across the startling discovery that in Unity there is no easy way to play a sample multiple times with good accuracy, like a metronome. It appears I can't do something like:


    void Start()
    {
        audioSource.PlayScheduled( 1.0 * 60 / bpm );
        audioSource.PlayScheduled( 2.0 * 60 / bpm );
        audioSource.PlayScheduled( 3.0 * 60 / bpm );
    }

    That will just overwrite everything with the last command. It's insanely frustrating! My plan now is to make some hot-seat AudioSource switching system, since it seems each AudioSource can only remember one scheduled time to play. Or maybe just create a wrapper PlayScheduledWithoutGoddamShortTermMemory function that creates an AudioSource each time you call it.
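
    For reference, a minimal sketch of that wrapper idea in plain Unity (class and method names are hypothetical); note that PlayScheduled expects an absolute DSP time, so offsets must be added to AudioSettings.dspTime:
    Code (CSharp):
    using UnityEngine;

    // Sketch of the wrapper idea described above: one AudioSource per scheduled hit.
    // Class and method names are hypothetical; PlayScheduled takes an absolute DSP time.
    public class ScheduledClipPlayer : MonoBehaviour
    {
        public AudioClip clip;
        public float bpm = 120f;

        public void PlayClipScheduled( double absoluteDspTime )
        {
            // One throwaway AudioSource per call; a pool would avoid the repeated allocations.
            AudioSource source = gameObject.AddComponent<AudioSource>();
            source.playOnAwake = false;
            source.clip = clip;
            source.PlayScheduled( absoluteDspTime );
            Destroy( source, ( float )( absoluteDspTime - AudioSettings.dspTime ) + clip.length + .1f );
        }

        void Start()
        {
            double start = AudioSettings.dspTime + .1; // small safety margin before the first hit
            double beat = 60.0 / bpm;

            for( int i = 0; i < 4; i++ )
            {
                PlayClipScheduled( start + i * beat );
            }
        }
    }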

    And secondly, if G-Audio can do this, can it do it in sync with a music track? And if the music track needs to be played in G-Audio as well, must that track be uncompressed in memory? I ask because I have loads of tracks in my game, and it would take a huge amount of memory if that's needed.

    The library I was using in AS3, called StandingWave3, managed to do perfect sound syncing with mp3s. I hope G-Audio can be the saviour of my Unity audio problems too!
     
  13. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi fizzd!

    Great little game, made me think of Duet, but easier to grasp and to get into rhythmically. Should definitely be expanded into something more polished, imo.

    G-Audio should be a good fit: it was built for 2D audio.

    With G-Audio, you have many different ways of playing sounds, depending on the amount of control you need over them. At the most basic level, you just ask the player object to play a mono blob of data at a given time, through a track:

    Code (CSharp):
    public string sampleName;
    public GATSampleBank bank;

    void Start()
    {
        GATData sampleData = bank.GetAudioData( sampleName );
        double playTime = AudioSettings.dspTime;
        float gain;
        int trackNb;

        for( int i = 0; i < 20; i++ )
        {
            playTime += ( double )Random.Range( .1f, 1f );
            gain = Random.value;
            trackNb = i % 2;
            GATManager.DefaultPlayer.PlayDataScheduled( sampleData, playTime, trackNb, gain );
        }
    }
    This will play the sample 20 times at random intervals (.1 to 1 s), alternating between tracks 0 and 1, at a random gain. Samples will overlap, no problem.

    There are many other ways to play audio: bypassing tracks altogether, or wrapping the data blob in G-Audio's equivalent to AudioSource ( GATRealtimeSample ) for control over pitch, looping, and sample accurate fades.

    Longer tracks can be played through a standard AudioSource (G-Audio uses the same time reference, so they'll sync fine), but sometimes (depending on the platform) PlayScheduled and timeSamples aren't accurate enough when streaming compressed audio. In that case, G-Audio can handle streaming of wav and ogg files at a lower level; it's not as easy, but I can always help.

    All in all, I hope you choose G-Audio for your project! Do have a look at our forums; you'll notice that I'm always keen to help out users.

    Cheers,

    Gregzo
     
  14. fizzd

    fizzd

    Joined:
    Jul 30, 2013
    Posts:
    21
    Thanks for the quick reply! I've read a bit about G-Audio, and I guess the big thing is that literally all I need is a better PlayScheduled function, as your PlayDataScheduled seems to be. I don't need anything else - and definitely not audio with any kind of randomness - and I'm wondering if it might be overkill for me to buy a $60 asset just for that. It's like paying $50 for a beat detection asset :p

    You must be super experienced in all things Unity Audio by now, so I'd like to ask: can you make a metronome just by using Unity's SetSampleData functions? Like, create a blank, super long Audio Clip and fill it with the copied SampleData from a blip sound every N samples? (I know this isn't directly about G-Audio, but I'd be very grateful for a bit of your wisdom if you don't mind!)
     
  15. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi again,

    It is possible to use SetData on a looped clip (it doesn't even need to be that long - preserving RAM is important); that's how I approached things when I built uPhase+. But since then, Unity's audio API has somewhat opened up, allowing much more efficient and less hacky 2D audio mixing. In fact, all that uPhase+ can do can be achieved with G-Audio with exactly zero lines of code, in the editor, and without even hitting play...
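
    A minimal sketch of that looped-clip approach in plain Unity (field names are illustrative, and the blip clip must be a readable, mono clip for GetData to work):
    Code (CSharp):
    using UnityEngine;

    // Sketch of the looped clip approach described above: a one-beat mono clip is created,
    // the blip is written at its start with SetData, and looping it acts as a metronome.
    [RequireComponent( typeof( AudioSource ) )]
    public class LoopedClipMetronome : MonoBehaviour
    {
        public AudioClip blip; // short mono click sound, readable / decompressed on load
        public float bpm = 120f;

        void Start()
        {
            int sampleRate  = AudioSettings.outputSampleRate;
            int beatSamples = Mathf.RoundToInt( sampleRate * 60f / bpm );

            float[] beatData = new float[ beatSamples ];
            float[] blipData = new float[ blip.samples ];
            blip.GetData( blipData, 0 );

            // Copy the blip to the start of the one-beat buffer ( truncated if longer than a beat ).
            int copyLength = Mathf.Min( blipData.Length, beatSamples );
            System.Array.Copy( blipData, beatData, copyLength );

            AudioClip loopClip = AudioClip.Create( "Metronome", beatSamples, 1, sampleRate, false );
            loopClip.SetData( beatData, 0 );

            AudioSource source = GetComponent<AudioSource>();
            source.clip = loopClip;
            source.loop = true;
            source.Play();
        }
    }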

    If you're still on the fence about spending $60, do PM me, we might be able to arrange something for you.

    Cheers,

    Gregzo
     
  16. fizzd

    fizzd

    Joined:
    Jul 30, 2013
    Posts:
    21
    Holy crap, you made a music tool in Unity?! That's pretty amazing. Some time back I was wondering if you could have DAW-like latency in a Unity app on iOS. It didn't appear so, as every Unity-made music app I used still fared worse than native DAWs, because ultimately it still uses OpenAL, which is slower than the AudioUnits framework that all the serious music apps use.

    Anyway, I'm intrigued by what you mean by Unity's audio API being opened up recently. I feel like I am where you were many, many years ago in terms of experience with audio in Unity.
     
  17. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Yeah, it was my very first piece of software... Pretty crappy code architecture, and quite funny/sad to think that with G-Audio it could all be done today more efficiently and without breaking a sweat.

    Anyway, the one major change in Unity's audio API since then is the addition of OnAudioFilterRead, a mixing callback that runs on the audio thread and which G-Audio uses to mix everything. But it's not very well documented, and can be buggy if not handled properly - see my multiple rants about it. Once one has understood the callback's limitations, it is sufficient to handle any scenario: just do everything "by hand" and pipe the mix through a single AudioSource's OnAudioFilterRead. That's essentially what G-Audio does for you.
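
    As a minimal illustration of the callback itself (the classic sine wave example, not G-Audio code):
    Code (CSharp):
    using UnityEngine;

    // Minimal illustration of the callback discussed above: OnAudioFilterRead runs on the
    // audio thread and receives the interleaved buffer about to be played. Here it simply
    // adds a sine wave on top of whatever the AudioSource provides.
    [RequireComponent( typeof( AudioSource ) )]
    public class SineFilter : MonoBehaviour
    {
        public float frequency = 440f;
        public float gain = .1f;

        double _phase;
        double _sampleRate;

        void Awake()
        {
            _sampleRate = AudioSettings.outputSampleRate;
        }

        void OnAudioFilterRead( float[] data, int channels )
        {
            double increment = frequency * 2.0 * System.Math.PI / _sampleRate;

            for( int i = 0; i < data.Length; i += channels )
            {
                float sample = gain * ( float )System.Math.Sin( _phase );
                _phase += increment;

                for( int c = 0; c < channels; c++ )
                {
                    data[ i + c ] += sample; // write the same value to every channel of this frame
                }
            }
        }
    }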

    About Audio Units: of course, it'll always be more efficient to use those on iOS. They're also much more user-friendly since iOS 8 introduced the AVAudioEngine API, which nicely wraps Core Audio's arduous C API in a friendly Obj-C one.
     
  18. fizzd

    fizzd

    Joined:
    Jul 30, 2013
    Posts:
    21
    Ah OK, yeah, I think I read somewhere about examples in the documentation using it to generate a sine wave, but I wouldn't have thought of using OnAudioFilterRead for a full-blown music engine. Impressive. :D And I don't know how I missed the announcement of AVAudioEngine. Thanks for the heads up!
     
  19. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    OnAudioFilterRead can be misused as an audio rendering callback: it executes on the audio thread, and nothing forbids you from piping all you want through a single AudioSource. The callback is fired after spatialization if any 3D sound is playing; otherwise it acts as a 2D mixing callback, so doing your own mixing there is a viable option.

    Unity's opaque mixer will handle mixing that with other AudioSource outputs behind the scenes, but if all audio is piped through that one source, you can pretty much run the show yourself.

    Unity 5 will add a lot to that setup, but the basic pipeline will stay the same: an AudioSource component handles spatialization of a single AudioClip, and that's that. It makes sense in 3D, but for 2D audio it's a bit absurd not to be able to simply schedule playback of blobs of data at accurate times, regardless of whether they overlap or not. I'll rewrite some of G-Audio's core components to take advantage of Unity 5's new tracks / mixer assets, and keep that convenient fire-and-forget behavior G-Audio has: play this at that time and gain, no additional component, no pooling of AudioSources, just arrays of data being queued for playback.
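
    To make the "schedule a blob of data" idea concrete, here is a rough plain-Unity sketch, not G-Audio's implementation - locking is coarse and the buffer start time is approximated by AudioSettings.dspTime, which a real engine would handle more carefully:
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Rough sketch of the fire-and-forget idea described above: mono float[] blobs are queued
    // with an absolute DSP time and mixed by hand in OnAudioFilterRead.
    [RequireComponent( typeof( AudioSource ) )]
    public class ScheduledDataMixer : MonoBehaviour
    {
        class ScheduledSample
        {
            public float[] data;
            public double dspTime;
            public float gain;
            public int playhead;
        }

        readonly List<ScheduledSample> _samples = new List<ScheduledSample>();
        double _sampleRate;

        void Awake()
        {
            _sampleRate = AudioSettings.outputSampleRate;
        }

        public void PlayDataScheduled( float[] monoData, double dspTime, float gain )
        {
            lock( _samples )
            {
                _samples.Add( new ScheduledSample{ data = monoData, dspTime = dspTime, gain = gain } );
            }
        }

        void OnAudioFilterRead( float[] buffer, int channels )
        {
            double bufferStart = AudioSettings.dspTime; // approximation of this buffer's start time
            int bufferFrames = buffer.Length / channels;

            lock( _samples )
            {
                for( int s = _samples.Count - 1; s >= 0; s-- )
                {
                    ScheduledSample sample = _samples[ s ];
                    int startFrame = ( int )( ( sample.dspTime - bufferStart ) * _sampleRate );
                    if( startFrame >= bufferFrames )
                        continue; // not due yet

                    // Mix the sample into the interleaved buffer from its scheduled frame ( or from
                    // frame 0 if it is already playing ), continuing from its playhead.
                    for( int frame = Mathf.Max( startFrame, 0 ); frame < bufferFrames; frame++ )
                    {
                        if( sample.playhead >= sample.data.Length )
                            break;

                        float value = sample.data[ sample.playhead++ ] * sample.gain;
                        for( int c = 0; c < channels; c++ )
                        {
                            buffer[ frame * channels + c ] += value;
                        }
                    }

                    if( sample.playhead >= sample.data.Length )
                        _samples.RemoveAt( s ); // done playing
                }
            }
        }
    }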
     
  20. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    Hey Gregzo

    So I bought G-Audio today. Do you have an idea when G-Audio will be compatible with Unity 5?

    I did the Unity Auto-API update, which changes .audio to GetComponent<AudioSource>() and so on, then uncommented the speaker setup section and assigned a mixer channel to the main AudioSource, but I was still unable to get a sound out of G-Audio.

    I did see your post here:
    http://www.g-audio-unity.com/forums/topic/important-g-audio-and-unity-5/

    Do you have temporary instructions to make it compatible?

    Many thanks.
     
  21. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi!

    GATPlayer, lines 405-445 (in Awake()):
    Code (CSharp):
    if( GATManager.UniqueInstance.SupportedSampleRates == GATManager.SampleRatesSupport.All )
    {
        //stuff
    }
    else
    {
        //More stuff
    }
    Get rid of it all. It's code that was there to hack around a bug in OnAudioFilterRead when the sample rate was not 44100 Hz, and it's now both obsolete and the source of the problem.
    Instead, you can add the following check (still in Awake()):

    Code (CSharp):
    this._audio = this.GetComponent< AudioSource >(); // add an AudioSource _audio private variable to GATPlayer
    _audio.playOnAwake = false;

    #if UNITY_5_0
    if( _audio.clip != null )
    {
        _audio.clip = null;
    #if GAT_DEBUG
        Debug.LogWarning( "As of Unity 5, GATPlayer's AudioSource's clip should be null" );
    #endif
    }
    #endif
    Let me know if that's working out. I'll submit an update soon which should wrap all this up cleanly; for now, this should work fine.

    If you get a never-ending null ref from the player in edit mode, just hit play and that should fix it. Disabling and re-enabling the GATPlayer component is worth a try as well.

    Let me know if you encounter any more issues,

    Gregzo
     
  22. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    Hi Gregzo

    Many thanks for the quick fix. Unfortunately I have not yet been able to test it, since I had to go back to Unity 4.6 due to a Unity UI bug :( - but hopefully others can benefit from your instructions.

    I'm busy getting into G-Audio with very good results so far :)

    Great package!
     
  23. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    I'm trying to add a filter from a string, so basically instead of:

    Code (csharp):
    _trackFiltersHandler.AddFilter<GATDistortion>(slotIndex);
    I would like to use something like this:

    Code (csharp):
    string filterType = "Distortion";
    _trackFiltersHandler.AddFilter<AGATMonoFilter.FilterTypeForName(filterType)>(slotIndex);
    But that doesn't work. Any idea how I can accomplish this?
     
  24. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Indeed!

    You can't just pass an unknown, runtime-resolved type as a generic T.
    You'll need to use reflection:
    Code (CSharp):
    // don't forget to include System.Reflection

    AGATMonoFilter AddFilterNamed( string filterName, GATFiltersHandler handler, int slot )
    {
        MethodInfo addFilterMethod = typeof( GATFiltersHandler ).GetMethod( "AddFilter" ); // you can cache this for performance
        System.Type filterType = AGATMonoFilter.FilterTypeForName( filterName );
        MethodInfo genericAdd = addFilterMethod.MakeGenericMethod( filterType ); // this will most probably fail on AOT platforms as it relies on JIT compilation: avoid on mobile!
        return ( AGATMonoFilter )genericAdd.Invoke( handler, new object[]{ slot } );
    }
    Hope it helps, but be warned: won't work on mobile, and not sure how Unity 5's IL2CPP will handle that kind of .NET wizardry...
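
    If you do need this on AOT platforms, a hedged alternative is to close the generic calls at compile time and look them up by name - the class names below come from this thread, but the AddFilter< T > return type is an assumption (if it returns void, use System.Action instead of System.Func):
    Code (CSharp):
    using System.Collections.Generic;

    // AOT-friendly alternative sketch: the generic calls are closed at compile time and looked
    // up by name, so no runtime MakeGenericMethod is needed. GATFiltersHandler, GATDistortion
    // and AGATMonoFilter are names from this thread; the AddFilter< T > return type is assumed.
    public static class FilterFactory
    {
        static readonly Dictionary< string, System.Func< GATFiltersHandler, int, AGATMonoFilter > > _adders =
            new Dictionary< string, System.Func< GATFiltersHandler, int, AGATMonoFilter > >
            {
                { "Distortion", ( handler, slot ) => handler.AddFilter< GATDistortion >( slot ) },
                // One entry per filter type you want to resolve from a string.
            };

        public static AGATMonoFilter AddFilterNamed( string filterName, GATFiltersHandler handler, int slot )
        {
            return _adders[ filterName ]( handler, slot );
        }
    }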

    Glad you're enjoying the package - lots in there to play with!

    Cheers,

    Gregzo
     
  25. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    Hey, many thanks for the help. Too bad it won't work on mobile - I guess I'll have to solve it differently then :-(
     
  26. Marionette

    Marionette

    Joined:
    Feb 3, 2013
    Posts:
    349
    Hey Greg, I'm currently working on granulation that uses PortAudio for buffer callbacks/IO etc., and it's working perfectly, but I'm encountering huge synchronization issues when trying to get something similar working in Unity. Will your asset allow me to process a buffer in a callback, similar to PortAudio, while relieving me of the sync issues? Would there be a possibility of a trial that I could test? I'd only need a trial for a day or two.

    tia,

    -Marionette
     
  27. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Marionette,

    Granular synthesis is a field where G-Audio's engine really shines: you can cache your grains for best performance, and memory is automatically recycled to avoid feeding the garbage collector. You can literally play with an envelope in edit mode, synthesizing a granular sound, and as you expand or change the envelope, all memory management is handled for you. Bear in mind G-Audio doesn't feature a granular synthesis engine, but its pulse and envelope classes are sturdy enough to push into granular synth territory (see the drone sound in the Break Demo on our website, made with these classes and a single piano sample).

    About processing buffers in a callback:
    You sure can in G-Audio, although it's more involved (every time you ask the engine to play a sound, you can specify a mixing callback).
    What about Unity's OnAudioFilterRead callback isn't working out for you?

    Cheers,

    Gregzo

    P.S.: trials are on a case by case basis, mainly for students who already have projects under way. PM me for more info.
     
  28. Marionette

    Marionette

    Joined:
    Feb 3, 2013
    Posts:
    349
    Thanks Greg, but I resolved my issue right after I posted this. Seems I had something silently failing, a bad cast, causing things to misalign.
     
  29. ilesonen

    ilesonen

    Joined:
    Sep 11, 2012
    Posts:
    49
    Hi Gregzo,
    I'm interested in this plugin, especially the iOS "time stretch" add-on. So, it means it can be used to change the tempo of any audio file without changing the pitch? Do you have any video demonstrating time stretching? Also, do the add-on and G-Audio work with Master Audio?
    Thanks!
     
  30. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi,

    I don't have a video, sorry! It is true time stretch (.5x to 2x). You can stream user files (aac, wav, mp3) or your own streaming assets, or even cached audio. Dirac is a famous time stretching / pitch shifting library that performs really well - check my free iOS pitch shifting asset to sample it; it can pitch shift any Unity AudioSource.

    One caveat: if you don't have Dirac Pro, only mono audio can be processed. G-Audio works around this limitation by de-interleaving and processing channels separately, which is a bit more performance-hungry and doesn't phase-lock channels.

    About Master Audio: they are very different assets and shouldn't interfere.

    Cheers,

    Gregzo
     
  31. ilesonen

    ilesonen

    Joined:
    Sep 11, 2012
    Posts:
    49
    Thank you! Are there any usage examples in the iOS stretch package? I'd like to have a feature in my app where the player could use a slider to change the tempo of the music.
    I'll check out the free pitch shift package. Thanks!
     
  32. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi,

    Yes, there's a simple example included, with both time and pitch sliders.

    Let me know if you need help, here or on G-Audio's forums.

    Cheers,

    Gregzo
     
  33. kewlking

    kewlking

    Joined:
    Dec 2, 2014
    Posts:
    6
    Hi Gregzo,

    Just a quick ping to see if the current build of G-Audio has the music-oriented classes and components that you mentioned were going to be rolled into v1.3 - namely GATInstrumentBank working with MIDI codes, pitch detection for sound banks, and ScalePattern classes. Very excited to purchase and try it out...

    Thanks!
     
  34. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi kewlking,

    There wasn't much demand, and I haven't found time to fully clean up what I've written already.

    What is there already:
    -pitch detection for sample banks (works quite well depending on the samples used, still beta-ish though)
    -custom code I've posted in the G-Audio forums to get the closest available sample from an input MIDI code, as well as the pitch factor to apply. Think SampleBank.GetClosestSample( float midiCode, out float pitchFactor ) - see the math sketch after these lists.

    What is not yet released but already fully functional:
    -scala file parser
    -scale classes

    What is functional but not stable enough:
    -midi parser
    -all sorts of complicated pitch / rhythm source systems that can be plugged into various players and banks - in short, the basis of a procedural music system that I'm not sure I'll finish (perhaps my approach was too low level: flexible but too complex).
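
    For reference, the standard equal temperament math behind the "closest sample + pitch factor" idea mentioned in the first list (plain C#, not G-Audio code):
    Code (CSharp):
    using UnityEngine;

    // Standard equal temperament math behind the "closest sample + pitch factor" idea above
    // ( not G-Audio code ). A pitch factor of 2 is one octave up, .5 one octave down.
    public static class MidiPitchUtil
    {
        // Frequency in Hz of a ( possibly fractional ) MIDI note, with A4 = 69 = 440 Hz.
        public static float MidiToFrequency( float midiCode )
        {
            return 440f * Mathf.Pow( 2f, ( midiCode - 69f ) / 12f );
        }

        // Fractional MIDI code for a frequency - the reverse mapping.
        public static float FrequencyToMidi( float frequency )
        {
            return 69f + 12f * Mathf.Log( frequency / 440f, 2f );
        }

        // Pitch factor to apply to a sample recorded at sampleMidiCode so it sounds at targetMidiCode.
        public static float PitchFactor( float sampleMidiCode, float targetMidiCode )
        {
            return Mathf.Pow( 2f, ( targetMidiCode - sampleMidiCode ) / 12f );
        }
    }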

    Feel free to ask if you have more questions.

    Cheers,

    Gregzo
     
  35. kewlking

    kewlking

    Joined:
    Dec 2, 2014
    Posts:
    6
    Thanks Gregzo.
     
  36. kewlking

    kewlking

    Joined:
    Dec 2, 2014
    Posts:
    6
    Thanks Gregzo. So, correct me if I am wrong in my understanding, but could I then do the reverse, where I take an input from the microphone and then match that to the closest MIDI code with the associated pitch factor?
     
  37. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    Hi Gregzo

    I would like to build a universal Windows 8.1 app, but when I compile, I get several errors regarding Threading, BackgroundWorker, DoWorkEventArgs and RunWorkerCompletedEventArgs.

    Would it be possible to update your Asset to support this platform?

    Thanks
     
  38. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi,

    Do you use G-Audio for reading or writing audio files from/to disk? That's where the threading stuff is.

    If you don't, I can quickly fix it and send you the next update early, by pm.

    Let me know,

    Gregzo
     
  39. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    Hey gregzo

    Thanks for your quick response.
    Well I'm just playing multiple GAT Resampling Sample Banks. Does that classify as reading from disk?
    I'm not writing to disk.

    Thanks.
     
  40. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Nope, only loading user files directly (not through an AudioClip) uses System.Threading classes.

    I'll have a patch for you tonight. Thanks for your patience,

    Gregzo
     
  41. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    Wow that would be great, many thanks for the great support!
     
  42. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Just sent you a PM. Hopefully it should be OK now!

    Cheers,

    Gregzo
     
  43. SelfishGenome

    SelfishGenome

    Joined:
    Jun 27, 2014
    Posts:
    15
    @gregzo Does G-Audio provide any compression options on mobile devices?
     
  44. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @SelfishGenome

    No, G-Audio piggybacks on Unity's compression options.
    It's also not the best tool for managing long tracks, as most of its advanced features require the audio data to be fully decompressed and stored as floats.

    What is it you're trying to do?
     
  45. SelfishGenome

    SelfishGenome

    Joined:
    Jun 27, 2014
    Posts:
    15
    @gregzo Hi, sorry, I tried to PM you, but regardless of how I worded it, the message was flagged as spam with no feedback as to why.

    I have set up an audio recorder for recording potentially long voice clips, so when a clip is saved to the device it would be ideal to convert the file from wav to mp3 or some other compressed format.
     
  46. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    For compressing audio, you could use the BASS library, which should be cross-platform.

    If you are targeting iOS, Extended Audio File Services is the API that will get things done without digging too deep into Core Audio. On Android, I've no idea.

    Quick reply from my phone, sorry for the lack of links. Google both.
     
  47. SelfishGenome

    SelfishGenome

    Joined:
    Jun 27, 2014
    Posts:
    15
    Amazing, thanks for the help, I will give them a look.
     
  48. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    If I play multiple sounds at the same time, distortion can occur. Does G-Audio have something built in to prevent this, or to calculate the required volume attenuation for the sounds so they won't distort?
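
    One generic way to keep a summed 2D mix from hard clipping, sketched here as a simple tanh soft clipper in OnAudioFilterRead (plain Unity code, not a G-Audio feature):
    Code (CSharp):
    using UnityEngine;

    // A tanh soft clipper applied as the last stage on the output: loud sums are squashed
    // smoothly instead of hard clipping. Plain Unity code, not a G-Audio feature.
    [RequireComponent( typeof( AudioSource ) )]
    public class SoftClipper : MonoBehaviour
    {
        public float drive = 1f; // values above 1 push harder into the curve

        void OnAudioFilterRead( float[] data, int channels )
        {
            for( int i = 0; i < data.Length; i++ )
            {
                data[ i ] = ( float )System.Math.Tanh( data[ i ] * drive );
            }
        }
    }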

    And on the other hand, I'm wondering if this would also happen if we used the new Unity audio channels/mixer instead of the GAT Player channels. Would it be a big task to re-route this in the script so individual sounds can play to the Unity mixer channels directly?

    Thanks
     
  49. Tony-Lovell

    Tony-Lovell

    Joined:
    Jul 14, 2014
    Posts:
    127
    I may be misreading this, but why does your Microphone code restrict the mic sampling rate to the same one as the system's audio output format?
     
  50. iddqd

    iddqd

    Joined:
    Apr 14, 2012
    Posts:
    501
    G-Audio is now open source, by the way.