Native Audio - Lower audio latency via OS's native audio library (iOS/Android)

Discussion in 'Assets and Asset Store' started by 5argon, Apr 15, 2018.

  1. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555

    Native Audio
    Lower audio latency via direct unmixed audio stream at native side

    Requirement : Unity 2019.3 LTS+ for iOS and Android.
    Asset Store Link : https://u3d.as/12wU
    Release Note : in the website https://exceed7.com/native-audio/CHANGELOG.html



    So your Unity game outputs WAY slower audio than other apps, even on the same device? It turns out Unity adds as much as 79% of the total latency you hear. But not to worry, because Native Audio will take care of all of it. I have researched the cause for a long time, and the solution is right here.


    What can we skip by going directly to the native API?
    Unity has an internal mixing system, backed by FMOD. You can go crazy with a method like AudioSource.PlayOneShot and the sound magically overlaps with itself, even though at the native side Unity asked Android for only 1 "audio track". How can that be? Because Unity spends time mixing them together into 1 bus. Plus you have all the wonders of the audio mixer system introduced in Unity 5.0.0. Effects, sends, mixers, etc. all stacked into that.

    A great design for a game engine. But a certain subset of us game developers absolutely do not want any of that "spend time" if possible. Unfortunately, Unity does not give us a choice to bypass it and just go native.

    For genres of apps and games that need critical audio timing (basically any response sound from input, rhythm games, etc.) this is not good. The idea to fix this is to call directly into native methods and have them read raw audio files we prepared without Unity's importer, bypassing Unity's audio path entirely.

    I have researched for a long time into the fastest native way on each respective platform. For iOS it is OpenAL (Objective-C/Swift) and for Android it is OpenSL ES (NDK/C). For more information about the other alternatives and why they are not good enough, please go to the implementation page.

    But having to interface with multiple different sets of libraries separately from Unity is a pain, so Native Audio is here to help...

    "Native Audio is a plugin that helps you easily load and play audio using each platform's fastest native method, from the same common interface in Unity."

    Android High-Performance Audio Ready
    It improves latency on iOS greatly too, but I guess many of you came here to fix the Android latency, which is already horrible even without Unity.

    I am proud to present that Native Audio follows all of Google's official best practices required to achieve High-Performance Audio on Android. I have additionally avoided all the latency-related compromises that Unity unfortunately had to choose for their "versatile" audio engine.
    • Uses C/NDK-level OpenSL ES, not Java MediaPlayer, SoundPool, or AudioTrack. Plus, most latency-critical interfacing methods from Unity go by extern to C, not by AndroidJavaClass to Java. Features of OpenSL ES that would add latency have been deliberately removed.
    • Ensures a "fast track" audio stream is instantiated at the hardware level, not a normal one. Native Audio does not have any kind of application-level mixer, and each sound goes straight to this fast track. Currently Unity does not get the fast track due to a sample rate mismatch, and moreover it somehow uses a deep buffer thread, which is designed for power saving but has the worst latency.
    • Built-in resampler. Resamples your audio on the fly to match the "device native sample rate". Each phone has its own preferred sample rate. (Required for the fast track.)
    • Minimum jitter by zero-padding the audio, so that its length is exactly a multiple of the "device native buffer size" to ensure consistent scheduling. Each phone has its own preferred buffer size. (See the sketch below this list.)
    • Double buffering, so your audio starts playing as soon as possible, unlike lazy single buffering where we must push the entire audio into the playing buffer first. (This is not the same step as loading the audio; we must go through this step on every play.) Combined with the previous point, the workload of each callback is deterministic.
    • Support for AAudio, the new and even faster standard from Google, is coming in the future. Players on Oreo (8.0) or higher will automatically get the AAudio implementation with no modification to your code.
    • Automatically receives better audio latency from future system performance improvements.
    Of course, all of this comes with publicly available, thorough research and confirmations. This means it can perform even better than a native Android app coded naively/lazily in regard to audio. Even pure Android developers might not want to go out of Java to C/NDK.
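    To make the zero-padding point above concrete, here is a minimal C# sketch of the idea (my own illustration; the plugin itself does this at the native side in C) :

    ```csharp
    // Illustration only: pad a PCM sample buffer so its length is an exact
    // multiple of the device's native buffer size, so the final buffer
    // callback never has a partially-filled (jitter-prone) buffer.
    static float[] ZeroPadToDeviceBuffer(float[] samples, int deviceBufferSize)
    {
        int remainder = samples.Length % deviceBufferSize;
        if (remainder == 0) return samples; // already aligned

        var padded = new float[samples.Length + (deviceBufferSize - remainder)];
        System.Array.Copy(samples, padded, samples.Length);
        // The tail is already zero-initialized in C#, i.e. silence.
        return padded;
    }
    ```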

    How much faster?
    Here are some benchmarks. While this is not a standard latency measurement approach like the loopback cable method, and the numbers alone are not comparable between devices, the time difference on the same device truly shows how much the latency decreased.

    Screenshot 2018-09-10 18.54.34.png

    The website contains much more to read : https://exceed7.com/native-audio

    Please do not hesitate to ask anything here or join the Discord channel. Thank you.
     
    Last edited: Dec 31, 2021
    deus0 and a-t-hellboy like this.
  2. liuxuan

    liuxuan

    Joined:
    Oct 13, 2014
    Posts:
    7
    I sincerely hope that your plugin could support Unity 5.6.
     
  3. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello @liuxuan, in fact it should work with every Unity version out there (down to 5.0.0 or even below) as long as it supports basic iOS + Android native plugins. Being only an interface to the native implementation, the feature set depends on the device's OS rather than on Unity.

    (In that sense, only Android Jelly Bean or above is supported, because of a special fast path constructor of AudioTrack; for iOS I am not sure how long OpenAL has been around.)

    But the current newest version of Unity (2018.2.0b11) can open projects going back only as far as 2017.1.3f1; older than that and the project format mismatches, requiring a Reimport All. And I just don't want to commit to making sure it works down to a very old version that is difficult to reach, just to find bugs for my users. (Also, Unity Hub installs start at 2017.1.4f1, so my commitment of support is actually a little more than required.)

    You could try it in 5.6 and I would not be so surprised if it works perfectly. But if it does not work, it would be difficult for me to find out what went wrong for you.
     
  4. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    By the way, BIG NEWS about the next update :

    • The Android part is undergoing a big migration to OpenSL ES instead of AudioTrack. Unlike AudioTrack (which is built on top of OpenSL ES, with similar latency in my tests), it is one of the officially mentioned "high-performance audio" ways of playing audio: https://developer.android.com/ndk/guides/audio/ It will be awesome. And being in the C part, Unity can invoke methods via extern as opposed to via AndroidJavaClass like what we currently have. (Speed!)

    • Additionally, I will go as far as resampling the audio file on the fly (we don't know which device the player will use, but practically we can only prepare 1 sampling rate of audio) to match each device's differing native sampling rate (either 44100Hz or 48000Hz) so that the special "fast path" audio is enabled. This would be awesome for any music games out there. (But it adds some load time when resampling is required; that is the price to pay.)

      About resampling quality, do not worry: instead of writing my own, which would be super slow and sound bad, I will incorporate the impressive libsamplerate (Secret Rabbit Code) http://www.mega-nerd.com/SRC/ which has a very permissive BSD license that just requires some attribution, not open-sourcing your game or anything.
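    To give a feel for what on-the-fly resampling does, here is a naive linear-interpolation sketch in C# (illustration only; the actual plugin will use libsamplerate, which does this far better and avoids aliasing) :

    ```csharp
    // Naive linear resampler for illustration. Assume mono float samples.
    static float[] ResampleLinear(float[] input, int srcRate, int dstRate)
    {
        int outLength = (int)((long)input.Length * dstRate / srcRate);
        var output = new float[outLength];
        for (int i = 0; i < outLength; i++)
        {
            // Position of this output sample back in the source signal.
            double srcPos = (double)i * srcRate / dstRate;
            int i0 = (int)srcPos;
            int i1 = System.Math.Min(i0 + 1, input.Length - 1);
            double t = srcPos - i0;
            output[i] = (float)(input[i0] * (1 - t) + input[i1] * t);
        }
        return output;
    }
    ```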

    • Even more, I will intentionally zero-pad the audio buffer so that it is a multiple of the "native buffer size" of each device, further reducing jitter when pushing data to the output stream.

    And now that I have gotten used to C programming on Android, this paves the way for AAudio, a new Google-developed native Android audio library accessible from C like OpenSL ES, but with even more potential since it writes data directly to an area very close to the audio unit.

    It is only usable on Android Oreo (8.0, better on 8.1) or higher, so I will make it so the player gets AAudio on Oreo and OpenSL ES otherwise. (Read more : https://source.android.com/devices/audio/aaudio , https://developer.android.com/ndk/guides/audio/aaudio/aaudio)

    This development actually happened because of a user request in the Discord channel https://discord.gg/8gthuWA and it is great for my own game too, so I decided to start working on it. You can follow the development of these features there, or tell me about any feature you would like to have and we will see about the possibility.
     
  5. a-t-hellboy

    a-t-hellboy

    Joined:
    Dec 6, 2013
    Posts:
    180
    Hey there,

    I'm working on a rhythm game where I want to sync gameplay (player movement) with music. Every beat, or every multiple of beats, the ball hits the platform. I use dspTime for syncing. It works perfectly on Windows, but Android is a disaster. The ball doesn't hit the platform at the correct moment, and also because of dspTime latency the ball bouncing is very bad; it's not smooth. Now I want to know: would using this native plugin be useful for my game? How can it solve the dspTime latency and the latency in playing the music the first time?
     
  6. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello. (I am also making a rhythm game, and this plugin is actually a core part of that game.)

    I do not recommend this plugin for music, since the requirement is that the native side cannot decompress and the file must be a .wav. That means 4 minutes of music could add 20MB to your game. A good plan is to use normal Unity audio for compressed tracks like BGM (which you will then have to use various tricks to make "stick" to some Unity time value that you can reference; I will come back to this later) and use Native Audio only for response sounds, which are not large.

    For example, take a drumming game where you drum along to the compressed BGM. You could get the fastest response by using Native Audio for the drum sound, while not costing much space. Native Audio is for solving the latency problem of playing short response sounds which you cannot predict will play or not, and which you want to play as fast as possible when they should play. Meaning, for example: the drumming game will not play the drum sound if you did not hit the screen. Games with coins to collect will not play the coin collection sound if the player missed them. But if there is only a backing track in your game (which plays for sure), then the problem is solvable without Native Audio, with a fixed offset/calibration.

    I will take this opportunity to write about the basics of music synchronization in rhythm/music games.

    The backing track problem

    This is no longer specific to Native Audio but Unity in general. In a rhythm game, first you have to get the backing track to line up with the first "note" (whatever that is depends on your game) and the rest will stay correct UNLESS the game lags or the audio lags. 90% of the time the game lags and the audio goes ahead of the game, since audio is no longer in the same processing unit as the game after the play command. The lag requires a separate resolution and I will not talk about it right now.

    Audio in Unity is "fire and forget". When you ask an AudioSource to play, it takes a variable amount of time and plays whenever it feels like it. This can be immediately in the same frame, a bit later but still in the same frame, or in the next frame. You get the idea. It is not frame-dependent anymore from the moment you call AudioSource.Play. And each device, especially on Android, has different audio latency.

    We cannot easily calculate latency in-game, so the backing track problem on Android is usually solved by having the user calibrate it themselves, since Android audio latency differs per device. If there is only a backing track and no response sound, then I think this is the best way to do it.

    After the user gets the correct offset for the device, it is your job to make the offset stay true, staying the same on every start of a play. Some rhythm games have problems even with manual calibration, because on each restart of the game the offset is different. This is the programmer's mistake, and it can make the user doubt whether they got the calibration right or not.

    Starting the music precisely

    As mentioned in the previous section, after your player has solved the device-specific latency for you, it is now your job to make that value hold every time. (Every restart; score-attack players will "retry" a lot.)

    1. Preload the audio accordingly.
    2. Immediate playing is not possible; the only solution is to use AudioSource.PlayScheduled, which can specify a precise point of time in the future. This method uses dspTime, so be mindful of where you read `AudioSettings.dspTime`, since this value can change (or not) even in between lines of code. The only thing to ensure is that this time is large enough for the audio to "prepare".
    3. According to the future time that you used, start your game's events as close to that time as possible. It is impossible to make a future frame land exactly on that time, so maybe it is best to just use the first frame which comes after that time. (This is similar to WaitForSeconds in StartCoroutine; it does not actually wait for that exact second, but possibly more depending on where the frame lands.) See the sketch below.
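    Here is a minimal sketch of that scheduling pattern (my own illustration; the 1-second lead time and the field names are arbitrary choices) :

    ```csharp
    using UnityEngine;

    public class ScheduledStart : MonoBehaviour
    {
        public AudioSource backingTrack; // assumed already preloaded
        double scheduledDspTime = -1;

        void Start()
        {
            // Read dspTime once and schedule comfortably in the future
            // so the audio system has time to prepare.
            scheduledDspTime = AudioSettings.dspTime + 1.0;
            backingTrack.PlayScheduled(scheduledDspTime);
        }

        void Update()
        {
            // Begin gameplay on the first frame at or after the scheduled time.
            if (scheduledDspTime > 0 && AudioSettings.dspTime >= scheduledDspTime)
            {
                scheduledDspTime = -1;
                // ... begin spawning notes / moving the ball here ...
            }
        }
    }
    ```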

    How to execute code the earliest in the frame

    Given the nature of `dspTime` and `realtimeSinceStartup`, which can change their values even between consecutive lines of code, it might be desirable to grab the value at the same point in code every frame, as early as possible, remember it, and use it with code that comes later in the frame.

    In Unity this is a bit troublesome, since `Update` runs a bit late even with the Script Execution Order moved to the topmost. The earliest step is the "Initialization" step, but getting your code to run in this step is currently not easy. (A sketch follows after this list.)

    1. With the new experimental PlayerLoop API you can add custom code to that step. See http://beardphantom.com/ghost-stories/unity-2018-and-playerloop/
    2. With Unity's new ECS/Entities package (get it from the Unity Package Manager) you can create a system with [UpdateBefore(typeof(Initialization))] to position its OnUpdate as early as possible.
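    For point 1, a minimal sketch of inserting a custom update at the very front of the player loop, assuming the Unity 2018 `UnityEngine.Experimental.LowLevel` namespace (in later Unity versions this API moved to `UnityEngine.LowLevel`) :

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Experimental.LowLevel; // Unity 2018.x experimental namespace

    public static class EarlyFrameClock
    {
        public static double dspTimeThisFrame;

        [RuntimeInitializeOnLoadMethod]
        static void Install()
        {
            var loop = PlayerLoop.GetDefaultPlayerLoop();

            var sampler = new PlayerLoopSystem
            {
                type = typeof(EarlyFrameClock),
                updateDelegate = () => dspTimeThisFrame = AudioSettings.dspTime,
            };

            // Put our sampler before everything else in the frame,
            // ahead of the built-in Initialization step.
            var systems = new List<PlayerLoopSystem>(loop.subSystemList);
            systems.Insert(0, sampler);
            loop.subSystemList = systems.ToArray();
            PlayerLoop.SetPlayerLoop(loop);
        }
    }
    ```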

    The response sound problem

    With the backing track correct, the only problem left is if your game has any kind of response sound. Response sounds cannot be calibrated/compensated, so the best you can do is rely on a way to play the sound with the shortest latency possible.

    This is finally what Native Audio tries to solve. Use it and get the most immediate playing possible. Notice that immediate playing will always still be less accurate than correct calibration, but calibration cannot work with response sounds, since you would have to move the sound earlier in time. (Unless you are a psychic and can predict that the player will surely hit the screen and activate the response sound.)

    Bonus : Syncing with the dspTime problem

    For most rhythm games getting the backing track accurate is enough, but what if you want to know where the audio is right now?

    You want the current audio time to be as real-time as Time.realtimeSinceStartup. That API changes its value even across 2 consecutive lines of code, indicating that it is very real-time.

    Screenshot_20180705-184855.png

    From my research, AudioSettings.dspTime and audioSource.time update in a separate, discrete step. In the same frame, if you ask for the value it may or may not have changed, depending on whether that update step happened in between the lines of code or not. But across 2 consecutive lines of code it is very likely to stay the same, unlike Time.realtimeSinceStartup.
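    You can observe this discreteness yourself with a quick probe like this (illustration only) :

    ```csharp
    using UnityEngine;

    public class DspTimeProbe : MonoBehaviour
    {
        void Update()
        {
            double dsp1 = AudioSettings.dspTime;
            double dsp2 = AudioSettings.dspTime;
            float real1 = Time.realtimeSinceStartup;
            float real2 = Time.realtimeSinceStartup;

            // dspTime almost always reads the same twice in a row (discrete steps),
            // while realtimeSinceStartup usually differs slightly on every read.
            Debug.Log("dsp changed: " + (dsp1 != dsp2) +
                      " | real changed: " + (real1 != real2));
        }
    }
    ```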

    And now we come to the native time. In version 2.0 you can ask for the dspTime of audio played by Native Audio. Unfortunately, I found that both Android and iOS report a time that also updates in discrete steps like `dspTime`. It seems that all audio engines are like this, and nothing is truly real-time.

    There are differences though :

    Android - the step is independent from Unity's dsp step (AudioSettings.dspTime and audioSource.time). If those two change in between lines of code, the Android time may not change. If the Android time changes, those two do not necessarily have to change.

    iOS - The time from OpenAL is surprisingly in the same lock step as AudioSettings.dspTime and audioSource.time, indicating that they internally use the same system. If one of them stays still, the rest also stay still.

    On iOS you see that yellow and blue overlap often. The time from native often stays closer to (or even the same as) the current dspTime than the time asked from Unity's audio source. It might be that, because there is less latency, the audio starts sooner and only the red line is delayed.

    Anyway, this is the new GetPlaybackTime method, and it will be in the next release whether it turns out to be useful or not.
     
    Last edited: Aug 20, 2018
    a-t-hellboy likes this.
  7. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello everyone. I guess I have finally weeded out all the performance bugs and other bugs of Native Audio, so from now on it is entering the benchmarking phase! Soon I will be able to release it for real in the store.

    As a teaser, this is the latency reduction we can expect from Native Audio 2.0



    (Yes... Unity's AudioSource is THAT slow. I too did not feel it until I had Native Audio to compare with.)

    Screenshot 2018-09-02 15.38.18.png

    From 323ms to 79ms. That means we have a 244ms latency reduction (-75.54%)!!

    And this phone is not old; the Mi A2 just came out. What you hear is the best latency you can get from the default Unity AudioSource (with Best Latency already selected in the Audio Settings panel).

    Here's a list of devices I own; I will benchmark them thoroughly. The benchmark data will be publicly available, including the data before averaging and the recorded sound files.

    Screenshot 2018-09-02 15.50.44.png
     
  8. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    The version 2.0 trailer is now online, just to show you how much latency Unity can add over your game on Android.
    It works on iOS too, by the way!

     
  9. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Version 2.0.0 has been released in the store today! The store text and infographics have been updated thoroughly too. Check it out : https://assetstore.unity.com/packages/tools/audio/native-audio-107067

    Moreover, some more benchmarks have been added to the website :

    Screenshot 2018-09-08 12.03.11.png

    I am trying to get my hands on more popular devices to add them soon. (That is, the Samsung Galaxy anything.)
     
  10. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Version 2.1.0 is underway :

    [IOS] 2D PANNING The iOS backend, OpenAL, is a 3D positional audio engine. 2D panning is emulated by deinterleaving a stereo source audio into 2 mono sources, then adjusting their distance from the listener so that it sounds like 2D panning.
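    For reference, deinterleaving a stereo buffer looks something like this (an illustration of the idea, not the plugin's actual code) :

    ```csharp
    // Split interleaved stereo samples [L R L R ...] into two mono buffers,
    // which can then be placed left/right of the listener to emulate panning.
    static void Deinterleave(float[] stereo, out float[] left, out float[] right)
    {
        int frames = stereo.Length / 2;
        left = new float[frames];
        right = new float[frames];
        for (int i = 0; i < frames; i++)
        {
            left[i] = stereo[2 * i];
            right[i] = stereo[2 * i + 1];
        }
    }
    ```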

    [ALL PLATFORMS] PLAY ADJUSTMENT There is a new member playAdjustment in PlayOptions that you can use with each nativeAudioPointer.Play(). You can adjust volume and pan right away BEFORE the play. This is because I discovered that on iOS, adjusting the volume immediately after play with NativeAudioController is too late to avoid briefly hearing the full volume.

    Trade-offs : Now iOS can play only 16 concurrent sounds instead of 32, because one stereo file now takes up 2 sources, one for each ear.

    It is impossible to adjust only one channel of a stereo file on iOS to achieve a "balance" effect. (Not "panning", but the method says "pan" anyway.)

    BUG FIXES
    Previously the Android panning that was supposed to work had no effect. Now it works, alongside the new iOS 2D panning. (I am sorry.)

    Demo APK
    Plus, I have added a demo APK to http://exceed7.com/native-audio/index.html around the release note. I am not sure how I can make a demo for iOS; maybe a manual TestFlight invite, or uploading the entire Xcode project (200MB, big and unwieldy).

    ---

    Benchmark update
    Thanks to my friend, I was able to get benchmarks of some of the very popular Samsung Galaxy S phones.

    Screenshot 2018-09-10 18.55.01.png

    The S9+, the current Samsung flagship, is currently holding the record for the best pure Unity audio latency. No other device tested has been able to get sub-100ms.

    This might mean that Unity's added latency is highly CPU-bound. When using Native Audio, the device's CPU matters less for playing audio. (Several very old phones have times almost equal to the S9+.)
     
  11. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Hello,

    The last version really improved Android audio, thanks !

    Will you consider adding new native audio methods like :

    - Pause / Resume <-- this one is a must-have !
    - Fade in / out
    - Play( float startPosition)

    Thanks
     
  12. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello,

    Pause / Resume, Play( float startPosition)

    These 2 are surprisingly closely related. For the pause/resume design I have looked at SoundPool's API design. In summary, both SoundPool and Native Audio govern and manipulate native audio tracks in some way :
    https://developer.android.com/reference/android/media/SoundPool

    - Play uses an audio ID as an argument; by some algorithm (instantiate a new track, and if that is not possible, overwrite the oldest track) a track is selected for that audio. It returns a track ID.
    - Pause/Resume/etc. require a track ID, which you must keep, not an audio ID.

    This means this sequence of unexpected behaviour is possible :
    - You play sound A, and after a while you call Pause on the returned track ID.
    - You play sounds B C D E ... so many times that all the tracks get recycled for other sounds.
    - When you use the stored track ID to call Resume, the sound will no longer be A. In fact, the "paused" state was already gone the moment the track with sound A got overwritten. Resume will do nothing.

    I believe this is what SoundPool does. However, what if we fix it like this :
    - Pausing returns a completely unique "pause ID", containing an audio ID and the time at which it was paused.
    - On resume, we start a completely new play (the track selection algorithm runs again), but not from the beginning.

    The pause ID might add some complexity to the API, requiring a new class. (It cannot be the same as `NativeAudioController`, since that is a representation of a track.)

    Also, this requires a "play from any point in the audio" feature. I intend to add this function for sure.

    Suppose that we can now play from any point. Currently there is a `GetPlaybackTime` function on `NativeAudioController`. If we use that to ask for the time, Stop the track with that controller, then start a new play from that time, it would be equal to my redesign of Pause() and Resume(). So it is possible to emulate pause and resume with only a "play from any point" function. (Makes the API cleaner? But at the same time it feels like a hack around pause and resume.) A sketch follows.
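    In code, that emulation would look something like this (a sketch using the API names discussed in this thread; the `startTime` offset field is a placeholder name for the planned "play from any point" option) :

    ```csharp
    // Emulating pause/resume with only "play from any point", as described above.
    public class PauseEmulation
    {
        float pausedAt;

        public void Pause(NativeAudioController controller)
        {
            pausedAt = controller.GetPlaybackTime(); // remember where we were
            controller.Stop();                       // free the track immediately
        }

        public NativeAudioController Resume(NativeAudioPointer pointer)
        {
            var options = new PlayOptions();
            options.playAdjustment.startTime = pausedAt; // hypothetical field name
            return pointer.Play(options);                // track selection runs again
        }
    }
    ```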

    So I would like to ask: if you had to do the time keeping + start-from-time yourself instead of Pause/Resume, would that be comfortable? (Only Play( float startPosition) would be implemented.) Because SoundPool also can't do an "intuitive pause" like Unity's AudioSource can.

    Fade In/Out
    For fade in/out, currently you can DIY by calling SetVolume repeatedly on the returned `NativeAudioController` after play. In my opinion, I don't want to put time-dependent helper methods in the API.
     
  13. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Hi,

    Pause/resume would be more convenient, but the feature is so important that we can deal with time keeping + start from time :)
     
  14. corbinyo

    corbinyo

    Joined:
    Aug 23, 2017
    Posts:
    26
    Will this cut down on microphone input latency?
     
  15. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Unfortunately no, Native Audio currently works only on output. : (
    OpenSL ES is definitely able to do input, but it is not implemented currently.

    It is unlikely to be implemented soon as well, since there are still a lot of missing output functions waiting to be implemented. (Pause/Resume/Loop/compressed OGG support/relaxing file format restrictions.) And personally, my game does not use audio input.

    Some articles that mention native input :
    https://developer.android.com/ndk/guides/audio/audio-latency#input-latency
    https://developer.android.com/ndk/guides/audio/opensl/opensl-prog-notes#perform
     
  16. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    @5argon so could the pause/resume feature using time keeping + start-from be planned ? Can we expect that feature in the near future ? :)

    Thanks
     
  17. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    I decided to take a shot at it today; I will come back to you later. I expect it to be done in a week.

    Additionally, you can PM me an invoice number, and you can have the version with that function early, before I submit it for Asset Store approval.
     
  18. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Ok, I got it done earlier than I thought. Android was a pain, but fortunately on iOS it's quite easy with OpenAL.
    I would like to take this opportunity to preview the next release note of Native Audio v2.2.0 :

    [ALL PLATFORMS] TRACK'S PLAYHEAD MANIPULATION METHODS ADDED
    • NativeAudio.Play(playOptions) : Able to specify a play offset in seconds via playAdjustment in the PlayOptions argument.
    • NativeAudioController : Added track-based pause, resume, get playback time, and set playback time even while the track is playing. Ways to pause and resume include using this track-based pause/resume, or using get playback time and storing it for a new Play(playOptions) later while at the same time Stop()-ing the track immediately, if you fear that the track's audio content might be overwritten before you can resume.
    • NativeAudioPointer : Added a Length property. It contains the cached audio length in seconds, calculated after loading.
    This video demonstrates something only possible with these features. I have to take some time checking everything before submitting.



    @Kiupe a beta of this version will be messaged to you soon.
     
  19. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Thanks !!!
     
  20. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello. A new experimental feature, Native Audio Generator, is currently in the works. It is initially for my game; I am still deciding whether or not it is good enough to be in the release.

    Screenshot 2018-10-06 13.11.37.png

    Some of you might have already made a kind of wrapper over Native Audio to map to your sounds or something similar; this feature will help you achieve that faster. You don't even have to type any code.

    By script generation, it creates a C# file that hard-codes all of your audio in a selectable subfolder of the StreamingAssets folder (it supports one more level of subfolder as a "group", as pictured). This is called the NativeAudioLibrary.

    ...along with an asset file (a ScriptableObject) to remember other settings for each sound, which you can modify. This file has to be in Resources, since a static variable will get it.

    For instance, all the string paths are stored in there, so there is no need to type them in code. (You don't want to edit this.) Also, we can store a Volume value which will be multiplied automatically with the volume you can already use when playing. This way we can use that asset file as an individual sound mixer of sorts. You can then modify the code to include other things for each one as you like. Each one of these is called a NativeAudioObject. (Responsible for storing a loaded NativeAudioPointer inside.)

    new.gif

    Using the generated script looks like this.
    • Calling Play() immediately on it is safe; it will load and keep the pointer automatically if it has not already.
    • In the generated script, group operations are available, such as loading/unloading every sound in a group. You can also still load individual sounds.
     
  21. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Thanks to a request from a user, in the next version everyone will get looping functionality. Plus, iOS can now specify a track index like Android.

    The next version will not be 2.2.0 but 3.0.0, since it will introduce some breaking changes.
    This is the current tentative feature list :

    ## [All Platforms] Track's playhead manipulation methods added

    - `NativeAudio.Play(playOptions)` : Able to specify a play offset in seconds via the `PlayOptions` argument.
    - `NativeAudioController` : Added track-based pause, resume, get playback time, and set playback time even while the track is playing. Ways to pause and resume include using this track-based pause/resume, or using get playback time and storing it for a new `Play(playOptions)` later while `Stop()`-ing the track immediately at the same time, if you fear that the track's audio content might be overwritten before you can resume.
    - `NativeAudioPointer` : Added a `Length` property. It contains the cached audio length in seconds, calculated after loading.

    ## [All Platforms] Track Looping

    A new `PlayOptions` member applies a looping state to the TRACK. This means that if some newer sound decides to use that track to play, the looping sound is immediately stopped.

    To protect the looping sound, you likely have to plan your track number usage manually with `PlayOptions.audioPlayerIndex`.

    - If you pause a looping track, it will resume in a looping state.
    - `nativeAudioController.GetPlaybackTime()` on a looping track returns a playback time that resets every loop, not an accumulated playback time over multiple loops.

    ## [iOS] Specify a track index

    Previously only Android could do this. Now you can specify an index 0 ~ 15 on iOS to use precisely that track for your audio. It is especially important for the new looping function.

    ## [EXPERIMENTAL] Native Audio Generator

    When you have tons of sounds in the `StreamingAssets` folder, it gets difficult to manage the string paths to load them.

    The "Native Audio Generator" will use a script generation to create a static access point like this : `NativeAudioLibrary.Action.Attack`, this is of a new type `NativeAudioObject` which manages the usual `NativeAudioPointer` inside. You can call `.Play()` on it directly among other things. You even get a neat per-sound mixer in your `Resources` folder which will be applied to the `.Play()` via `NativeAudioObject` automatically.

    Use the `Assets > Native Audio > Generate or update NativeAudioLibrary` menu, then point the pop-up dialog to any folder inside your `StreamingAssets` folder. It must contain one more layer of folders as group names before finally arriving at the audio files. Try it on the `StreamingAssets` folder example that comes with the package.

    This is still not documented anywhere on the website yet, but I think it is quite ready for use now.

    ## [Breaking Change] `PlayAdjustment` inside `PlayOptions` is no more.

    Having 2 layers of configuration is not good API design; initially I did that because we needed a struct for interop and a class for its default-value ability.

    I decided to make it 1 layer. The entire `PlayOptions` is now used to interop with the native side.

    Everything has moved into `PlayOptions`, and `PlayOptions` is now a struct. (Previously the `PlayAdjustment` inside was the struct.)

    Since it is not a class anymore, to get the default `PlayOptions` you now have to use `PlayOptions.defaultOptions` and then modify things from there. If you use `new PlayOptions()` the struct's default values are not good ones. (For example, volume's default is supposed to be 1, not the int default 0.)
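    A minimal usage sketch of that rule (`audioPlayerIndex` and the volume default are from this changelog; `nativeAudioPointer` stands for any loaded pointer) :

    ```csharp
    // Always start from defaultOptions; `new PlayOptions()` would give bad
    // struct defaults such as volume = 0 (you would hear nothing).
    var options = PlayOptions.defaultOptions;  // sensible defaults, e.g. volume = 1
    options.audioPlayerIndex = 0;              // hard-specify track 0, e.g. to protect a loop
    nativeAudioPointer.Play(options);
    ```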
     
    Last edited: Oct 18, 2018
  22. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    All those new features really sound great !

    Thanks.
     
  23. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Native Audio 3.0.0 has been released with the following features.
    (Or you can read it here : http://exceed7.com/native-audio/changelog.html)

    ## Added

    ### [All Platforms] Track's playhead manipulation methods added

    - `NativeAudio.Play(playOptions)` : Able to specify a play offset in seconds in the `PlayOptions` argument.
    - `NativeAudioController` : Added track-based pause, resume, get playback time, and set playback time even while the track is playing. Ways to pause and resume include using this track-based pause/resume, or using get playback time and storing it for a new `Play(playOptions)` later while `Stop()`-ing the track immediately at the same time, if you fear that the track's audio content might be overwritten before you can resume.
    - `NativeAudioPointer` : Added a `Length` property. It contains the cached audio length in seconds, calculated after loading.

    ### [All Platforms] Track Looping

    A new `PlayOptions` member applies a looping state to the TRACK. This means that if some newer sound decides to use that track to play, the looping sound is immediately stopped.

    To protect the looping sound, you likely have to plan your track number usage manually with `PlayOptions.audioPlayerIndex`.

    - If you pause a looping track, it will resume in a looping state.
    - `nativeAudioController.GetPlaybackTime()` on a looping track returns a playback time that resets every loop, not an accumulated playback time over multiple loops.

    ### [iOS] Specifying a track index

    Previously only Android could do this. Now you can specify an index 0 ~ 15 on iOS to use precisely that track for your audio. It is especially important for the new looping function.

    ### [EXPERIMENTAL] Native Audio Generator

    When you have tons of sounds in the `StreamingAssets` folder, it gets difficult to manage the string paths to load them.

    The "Native Audio Generator" will use a script generation to create a static access point like this : `NativeAudioLibrary.Action.Attack`, this is of a new type `NativeAudioObject` which manages the usual `NativeAudioPointer` inside. You can call `.Play()` on it directly among other things. You even get a neat per-sound mixer in your `Resources` folder which will be applied to the `.Play()` via `NativeAudioObject` automatically.

    Use the `Assets > Native Audio > Generate or update NativeAudioLibrary` menu, then point the pop-up dialog to any folder inside your `StreamingAssets` folder. It must contain one more layer of folders as group names before finally arriving at the audio files. Try it on the `StreamingAssets` folder example that comes with the package.

    This is still not documented anywhere on the website yet, but I think it is quite ready for use now. EXPERIMENTAL means it might be removed in the future if I find it is not good enough.

    ## Removed

    ### `PlayAdjustment` inside `PlayOptions` is no more.

    Having 2 layers of configuration is not good API design; initially I did that because we needed a struct for interop and a class for its default-value ability.

    I decided to make it 1 layer. The entire `PlayOptions` is now used to interop with the native side.

    Everything has moved into `PlayOptions`, and `PlayOptions` is now a struct. (Previously the `PlayAdjustment` inside was the struct.) Since it is not a class anymore, to get the default `PlayOptions` you now have to use `PlayOptions.defaultOptions` and then modify things from there. If you use `new PlayOptions()` the struct's default values are not good ones. (For example, volume's default is supposed to be 1, not the int default 0.)

    ---

    Stay tuned for the next version with huge quality-of-life upgrades. I aim to relax the load format greatly and integrate more closely with Unity.
     
  24. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Spicy news!

    Today Unity 2019.1.0a7 appeared in the Unity Hub. In the release notes, there's this..

    Screenshot 2018-11-02 08.52.38.png

    I will try it against Native Audio and summarize the findings for you soon!

    I have a phone which I am quite sure is slower than average (the Xiaomi Mi A2: a new phone, but with extremely high Unity audio latency; it likely received the AudioTrack treatment), so we will see what improvement Unity has this time.

    For a recap, Native Audio forces OpenSL ES on all phones without a mixer, but utilizes multiple OpenSL ES tracks for last-stage mixing. Unity, in the case where OpenSL ES is enabled, uses 1 OpenSL ES track but mixes the audio streams at the application level.
     
  25. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    I realized I forgot to report back my findings regarding the 2019.1.0 alpha audio improvement. Here it is :
    https://gametorrahod.com/unitys-android-audio-latency-improvement-in-2019-1-0-ebcffc31a947

    In short, devices that previously got the slow track now have a relaxed requirement to get the fast track. But Native Audio will still have an edge in latency, because NA does not mix and NA can also play mid-frame.

    - Coming soon in v4.0.0 -

    Is that you can now use a Unity-imported AudioClip as the data for Native Audio's load method. That means :

    1. You can use OGG! I don't even have to add a native OGG decoder on the native side!
    2. You can use the same audio as you hear in the editor!
    3. Use Unity's audio importer to smash down the quality as you like, and you will get a low-latency version of that!
    4. Also, time to ditch StreamingAssets!

    The caveats are that it must be "Decompress On Load" to allow NA to read the audio data at all, and there will be a moment when the audio data takes 3x memory space : when the AudioClip is loaded in memory, the data array from GetData, and finally the data copied and kept at the native side. Afterwards you can free up the audio data on the C# side to leave just the uncompressed PCM data at the native side.

    This version already seems to work by itself right now, so rest assured that this feature is possible and not just a concept; it is now in the testing and documentation phase. I plan for it to hit the store around Christmas.

    An increase in the leftmost version number means there will be some breaking API changes.

    - Talk at Unity@Bangkok 2018 -

    I recently talked about Android audio latency at the Unity@Bangkok 2018 event. (https://www.eventbrite.com/e/unitybangkok-2018-tickets-49373492445) They recorded the session; if it becomes available I will post a link to it later.

    IMG_20181114_081843.jpg
     
    eSmus1c likes this.
  26. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Happy new year, here's my new year gift for all of you : Version 4.0.0 is out!

    Screenshot_20190102-2008172.png


    The website has been updated with the new "AudioClip workflow". Please check the changelog here : http://exceed7.com/native-audio/changelog.html or, if you prefer, read it in markdown format..

    # [4.0.0]

    ## Added

    ### [All Platforms] New load API : `NativeAudio.Load(AudioClip)`


    You are now freed from the `StreamingAssets` folder, because you can give data to Native Audio via a Unity-loaded `AudioClip`.

    Here's how loading this way works; it is quite costly but convenient nonetheless :

    - It uses `audioClip.GetData` to get a float array of PCM data.
    - That float array is converted to a byte array which represents 16-bit-per-sample PCM audio.
    - The byte array is sent to the native side. Native Audio **copies** those bytes and keeps them at the native side. You are then safe to release the bytes on the Unity side without affecting the native data.
    - Thus it definitely takes more time than the old `StreamingAssets` folder way. Your game might hiccup a bit since the copy is synchronous. Do this in a loading scene.

    This is now the recommended way of loading audio; it allows a platform like PC, which Native Audio does not support, to use the same imported audio file as Android and iOS. Also, for the tech-savvy, you can use the newest Addressables Asset System to load audio from anywhere (local or remote) and use it with Native Audio once you get hold of it as an `AudioClip`.
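    The float-to-byte conversion step described above looks roughly like this (my illustration of the described pipeline, not the package's actual source) :

    ```csharp
    // Convert Unity's float PCM samples (-1..1) into 16-bit little-endian bytes.
    static byte[] ToPcm16(float[] samples)
    {
        var bytes = new byte[samples.Length * 2];
        for (int i = 0; i < samples.Length; i++)
        {
            float clamped = UnityEngine.Mathf.Clamp(samples[i], -1f, 1f);
            short s = (short)(clamped * short.MaxValue);
            bytes[2 * i] = (byte)(s & 0xff);            // low byte
            bytes[2 * i + 1] = (byte)((s >> 8) & 0xff); // high byte
        }
        return bytes;
    }
    ```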

    Hard requirements :

    - The load type **MUST be Decompress On Load** so Native Audio can read a raw PCM byte array from your compressed audio.
    - If you use Load In Background, you must call `audioClip.LoadAudioData()` beforehand and ensure that `audioClip.loadState` is `AudioDataLoadState.Loaded` before calling `NativeAudio.Load`. Otherwise it will throw an exception. If you are not using Load In Background but also not using Preload Audio Data, Native Audio can load it for you if it is not yet loaded. (A sketch of this check follows below the list.)
    - Must not be ambisonic.

    In Unity's importer, it works with all compression formats, force to mono, overriding to any sample rate, and the quality slider.
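    For the Load In Background case above, a waiting pattern could look like this (a sketch assuming `NativeAudio.Load(AudioClip)` returns a `NativeAudioPointer` like the string-path overload does) :

    ```csharp
    using System.Collections;
    using UnityEngine;

    public class NativeAudioLoader : MonoBehaviour
    {
        // Waits for a "Load In Background" clip, then hands it to Native Audio.
        public IEnumerator LoadIntoNativeAudio(AudioClip clip)
        {
            if (clip.loadState != AudioDataLoadState.Loaded)
            {
                clip.LoadAudioData();
                yield return new WaitUntil(() => clip.loadState == AudioDataLoadState.Loaded);
            }
            // Safe now: the decompressed PCM data is ready to be read.
            var pointer = NativeAudio.Load(clip);
            // ... keep `pointer` around to Play() later ...
        }
    }
    ```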

    The old `NativeAudio.Load(string audioPath)` is now documented as an advanced-use method. You should not need it anymore in most cases.

    ### [All Platforms] OGG support added via `NativeAudio.Load(AudioClip)`

    Continuing from the previous point, being able to send data from Unity means that we can now use OGG. I don't even have to write my own native OGG decoder!

    The load type must be **Decompress On Load** to enable the decompressed raw PCM data to be read before sending it to Native Audio. This means that at the moment you load, it will consume full PCM data in Unity on the read **and** also full PCM data again at the native side, resulting in double the uncompressed memory cost. You can call `audioClip.UnloadAudioData` afterwards to free up the managed side's memory, leaving just the uncompressed native memory.

    OGG support is not implemented for the old `NativeAudio.Load(string audioPath)`. An error has been added that throws when you use a string path ending in ".ogg", to prevent misuse.

    ### [iOS] Resampler added, but not enabled yet

    I have added the `libsamplerate` integration to the native side, but have not activated it yet.

    Now you can load audio of any sampling rate. Currently I don't have information on which sampling rate is best (latency-wise) for each iOS device, so for now I leave the audio alone at its imported rate.

    Combined with the previous points, you are free to use any sampling rate override import setting specified in Unity.

    ### [All Platforms] Mono support added

    - When you load 1-channel audio, it will be duplicated into 2 channels (stereo) in memory. Mono saves space only on the device, not in-memory.
    - Combined with the previous points, you are free to use the `Force To Mono` Unity importer checkbox.

    ### [Android] NativeAudio.GetDeviceAudioInformation()

    It returns the audio feature information of an Android phone. [Superpowered hosts a nice database of this information for various phones.](https://superpowered.com/latency)

    Native Audio already instantiates a good audio track based on this information, but you could use it in other ways, such as forcing your Unity DSP buffer size to be in line with the phone's, etc. There are cases where Unity's "Best Latency" results in a buffer size guess so low that it makes Unity-played audio slow down and glitch out.

    ## Changed

    ### `LoadOptions.androidResamplingQuality` renamed to `LoadOptions.resamplingQuality`

    Because now iOS can also resample your audio.

    ## Removed

    ### [EXPERIMENTAL] Native Audio Generator removed

    It was only here for 1 version, but now that the recommended way is to load via Unity's importer, it is not worth maintaining anymore. (That's why I marked it as experimental!)
     
    Kiupe likes this.
  27. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    Greetings!

    We are developing a music-based rhythm game for iOS/Android and obviously faced the latency problem, especially on Android devices. Long story short, we are now using your Native Audio plugin for Unity. It works great on almost all devices we've tested (not many, actually), except the Huawei P20 Pro (model CLT-L29, HiSilicon Kirin 970 processor, Android 9, EMUI 9.0.0, build 9.0.0.161 [C10E2R1P9]). It looks like Native Audio doesn't work on this phone at all. It gives exactly the same poor latency with Native Audio or without it (pure Unity audio with the "best latency" setting). We are using Unity 2018.2.19f1 for now.

    To make sure this isn't our internal game problem, we tested the Huawei P20 Pro with your latest Android demo app. The result was exactly the same: no latency difference between Native Audio (the "Play Native" button) and Unity's audio (the "Play Unity AudioSource" button). When I press these two buttons simultaneously, the demo app plays the sounds simultaneously too, with the same latency. Also, this table (https://superpowered.com/latency) shows poor audio latency on the P20 Pro (around 200 ms, sic!).

    The list of our tested devices that work great with Native Audio, with acceptable and playable latency (around ~70-80 ms):
    Xiaomi Redmi 4 Pro (Android 6.0.1, MIUI 9.6.2.0)
    Xiaomi Redmi 4X (Android 7.1, MIUI 10.1.1.0)
    Xiaomi Mi Mix 2 (Android 8.0, MIUI 10.0.2.0)
    Samsung Galaxy S7 Edge (Android 8.0)

    Is there any Kirin 970 related "feature" on the Huawei P20 Pro that affects Native Audio? Any thoughts/suggestions/fixes?

    Thank you.
     
  28. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello,

    I have not heard of any case of NA being ineffective before; the previous cases were either usable or crashed, which I have fixed.

    One user in the Discord reported that the MeiZu Note5 and Huawei Mate 20 Pro have a Unity audio problem where Best Latency fails and produces noise. (That is, the automatically selected buffer size is so low that it causes buffer underrun.) When using Native Audio, the problem may or may not fix itself. NA uses the buffer size that the phone reports (the "optimal size"), but apparently the optimal size still fails on some phones. MeiZu is one such phone.

    I would like to get my hands on a P20 or Mate 20 Pro someday, since they are available here. Thanks for the report!
     
  29. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    Thanks for reply.

    So, as far as I understand, the main problem with the Huawei P20 Pro is the poor phone audio buffer size (960 samples, according to Superpowered's table), and Native Audio can't do anything here. Am I right?

    The weird thing is, why is there literally no latency difference between the "Play Native" button and the "Play Unity AudioSource" button in your demo app on the P20 Pro (it's about ~200-250 ms on both buttons)? To me, it looks like NA is ineffective and just isn't working here.

    It turns out that NA is not a "super-universal" solution for reducing Unity's audio latency, and it strongly depends on some "native" phone features, right? Also, is there any way to somehow reduce the native phone buffer size?

    BTW, there is no noise with Unity's "best latency" setting on the P20 Pro, like you described for the Mate 20 Pro and Meizu Note 5.
     
  30. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    NA will ask the phone for the native sampling rate/buffer size and make many audio sources for playing. Each one represents one sound. Using OpenSL ES.

    Unity will use a certain buffer size (algorithm unknown; the only setting is that Best Latency) and if it is within certain criteria it also uses OpenSL ES; otherwise it uses the slower but safer Java AudioTrack. (Read more: https://gametorrahod.com/unitys-android-audio-latency-improvement-in-2019-1-0-ebcffc31a947 ) Unity asks for only 1 audio source, but mixes at the application level in order to use that one source to play virtually unlimited concurrent sounds.

    If both Unity and NA use a similar buffer size and both use OpenSL ES, then the advantage NA has left over Unity is that the audio does not get mixed, while Unity always has to go through the internal FMOD mixer. That's why I am surprised you said the times were equal, since at the very least you should have the advantage of skipping the mixer, even if everything else is the same...
     
  31. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    I agree, a strange and surprising thing is happening. I guess you should find a P20 Pro somewhere and test it yourself.
     
  32. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    Hello again. I brought some details and test samples for the Huawei P20 Pro. I measured the time between a fingernail click and the actual sound from the phone speaker. The microphone was as close as possible to the phone speaker.

    1. Your NativeAudio Android demo app, pressed on "Play Native" button: http://puu.sh/CQ6kE.wav
    Results: minimum 205 ms, maximum 241 ms, average 220 ms.

    2. Your NativeAudio Android demo app, pressed on "Play Native" button and "Play Unity AudioSource" button simultaneously (you can actually hear doubled fingernail sound and doubled respond sound): http://puu.sh/CQ6kN.wav
    Results: ~ the same as above.

    3. Our game on P20 Pro with your NativeAudio plugin, pressed on "Play Native" button: http://puu.sh/CQ6m9.wav
    Results: minimum 214 ms, maximum 227 ms, average 221 ms.

    4. I found that the "Opsu!" mobile rhythm game (https://play.google.com/store/apps/details?id=fluddokt.opsu.android) has slightly lower latency, but still not acceptable. I don't know what method of reducing latency they are using. This game is open-source and written in Java: http://puu.sh/CQ6m9.wav
    Results: minimum 182 ms, maximum 212 ms, average 194 ms.

    For comparison, I recorded the same tests on my Xiaomi Redmi 4 Prime and got this:
    1. Your NativeAudio Android demo app, pressed on "Play Native" button: minimum 52 ms, maximum 74 ms, average 62 ms.
    2. Our game with your NativeAudio plugin, pressed on "Play Native" button: minimum 60 ms, maximum 106 ms, average 83 ms.
    3. "Opsu!" game: minimum 80 ms, maximum 121 ms, average 94 ms.

    Any thoughts?

    Thanks.
     
  33. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    There's definitely something going on with the P20. In my Discord server, 2 developers reported that many MeiZu phones and the Huawei Mate 20 / 20X / P20 have an audio buffer size problem with both Unity's AND Native Audio's chosen buffer size. It is too low and causes underrun. That is not directly related to the latency numbers (since a small buffer = low latency), but it shows that these phones may have something odd surrounding them.

    The best thing would be for me to get my hands on those phones, but it is not really feasible for me to buy all the flagship devices for hunting bugs.. but if you want to continue this research, could you try `adb logcat` while playing the audio and see if something comes up? Also, you could try `adb shell dumpsys media.audio_flinger` immediately after playing an audio and paste the log here. (Or in the Discord server.)
     
  34. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    According to Superpowered's table (https://superpowered.com/latency) and my own tests, the Huawei P20 Pro's audio buffer size is 960 samples: http://puu.sh/CRrta.png. I can't say this is a "small buffer". As you can see, it causes ~200 ms round-trip audio latency. For example, my Xiaomi Redmi 4 Prime has just 192 samples and ~40 ms latency.

    We can surely continue this research, but first: the P20 Pro is my boss's own phone, so I can't test it A LOT. Second: I'm a sound designer/engineer, not a programmer-developer, so I'm not sure where I should paste "adb logcat" or "adb shell dumpsys media.audio_flinger". If you give me detailed instructions on what to do, I think I can help you.
     
  35. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Ok, the "adb logcat" and "adb shell dumpsys media.audio_flinger" is a Terminal/Command Prompt command. First you have to install the Android Debug Bridge (https://www.xda-developers.com/install-adb-windows-macos-linux/). After that if you have USB connection to the phone while running those command in your terminal shell, it would ask something from the phone and report back.

    If you know that the number is as high as 960, you can definitely try to forcefully lower it when you initialize Native Audio and see if it is able to make the audio faster while not causing buffer underrun or not. If the audio is faster but sounds broken, then that means 960 is BOTH the safe size and the fastest size. I hope it could go a bit lower, but currently it looks like both Unity and Native Audio is respecting that large amount. (If this phone is popular among players, then maybe you could `if` in the code to look for specifically this phone after you confirmed the smaller buffer size that works.)
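    A sketch of that per-device `if` (the `InitializationOptions` / `androidBufferSize` names here are placeholders; check the package documentation for the real initialization option) :

    ```csharp
    using UnityEngine;

    public static class DeviceSpecificInit
    {
        public static void InitNativeAudio()
        {
            if (SystemInfo.deviceModel.Contains("CLT-L29")) // Huawei P20 Pro model
            {
                var options = new NativeAudio.InitializationOptions();
                options.androidBufferSize = 480; // experiment: half of the reported 960
                NativeAudio.Initialize(options);
            }
            else
            {
                NativeAudio.Initialize(); // respect the phone's reported optimal size
            }
        }
    }
    ```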

    Also, I would like to report that both the Mate 20 and 20X have a buffer size of 240; much smaller, but a similar audio problem occurs.

    53142131_2209766109043983_2747447084117393408_n.jpg
     
  36. prabubejoslamet

    prabubejoslamet

    Joined:
    Feb 1, 2016
    Posts:
    2
    Hi, 5argon! Your tool is just amazing. I am developing a rhythm-based mechanic for my game, had big trouble with Android latency, and your tool saved the day!

    Got a question: I use .audioPlayerIndex for looping the BGM, very successfully on Android but not on iOS. The index randomly returns to zero, resulting in looping from the intro music. Any suggestions?

    Thanks in advance!
     
  37. luong_pham

    luong_pham

    Joined:
    Jan 2, 2019
    Posts:
    16
    Hi @5argon , I have some problems with SoundPool losing sounds on some devices. I wonder if your plugin would have the same problem?
     
  38. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Hello,

    I am not sure what you mean by looping with the index.. and what is "looping from the intro music"? What technique did you use? Are you using the provided looping function?

    Basically, to loop, you play an audio with the loop argument + hard-specify the player index (for example 0). Then, to prevent other audio from stopping the loop and overwriting that audio source, you have to be careful and always hard-specify the index so that it avoids the looping sound. (Without specifying the index, the index goes round-robin: 0 1 2 3 ... max, then back to 0. Maybe this is what you are referring to.) If you have 5 sources in total, you would have to manage your own round-robin algorithm so that it goes 1 2 3 4 1 2 3 4, never touching source 0, because that one is looping. (A sketch follows below.)
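    A minimal sketch of that manual round-robin (illustration only; `audioPlayerIndex` is the field named above) :

    ```csharp
    // Reserve source 0 for the looping BGM; cycle SFX over sources 1..4.
    int nextSfxIndex = 1;

    int GetNextSfxIndex()
    {
        int index = nextSfxIndex;
        nextSfxIndex = (nextSfxIndex >= 4) ? 1 : nextSfxIndex + 1; // wrap, skipping 0
        return index; // assign this to playOptions.audioPlayerIndex before Play()
    }
    ```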

    Technically SoundPool goes to the same core (OpenSL ES) as Native Audio. If the problem happens at the SoundPool level then Native Audio can help, but if the problem is at the OpenSL ES level then it is bound to be the same. What do you mean by "lost sound"? If you mean complete silence, I have never experienced that with Native Audio before. However, if you mean glitching audio, that is because the buffer size is too low, which happens on some devices.
     
  39. prabubejoslamet

    prabubejoslamet

    Joined:
    Feb 1, 2016
    Posts:
    2
    Sorry for not being clear. I meant loop as in introloop-ing my BGM by specifying it at index 0, then SetPlaybackTime(x). On occasion, when I called GetPlaybackTime(), the value returned to zero.

    Turns out I made a mistake by setting one of the sound effects to the same index 0, so the music was overridden.
     
  40. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    Hey, @5argon.
    I tried to do everything like you described. Please check these two logs from my Xiaomi Redmi 4 Prime. If these logs are correct, I'll try to do the same on the P20 Pro.
     

    Attached Files:

  41. luong_pham

    luong_pham

    Joined:
    Jan 2, 2019
    Posts:
    16
    Yes. What I mean by "lost sound" is: we load a sound without error (or maybe it cannot load correctly, I'm not sure). When playing it, nothing occurs (no sound), but some other sounds still play (so only a few sounds are lost). And it only happens on a few devices. The reason I ask is that when I asked someone whether they knew about your plugin, they told me you "mention" having the same problem (not loading a sound correctly) somewhere in your documentation. I just want to confirm this with you.

    P.S.: how many sounds can you play simultaneously?
     
    Last edited: Mar 12, 2019
  42. BachmannT

    BachmannT

    Joined:
    Nov 20, 2016
    Posts:
    386
    Hello 5argon,
    I'm very interested in your work! My goal is to build a wavetable-based synthesizer. All seems good on PC, but I have some issues on Android.
    First, the latency, of course, and NA gives an answer!
    Second, SoundPool is unable to play a wave with the loopstart and loopend wav attributes. It can only repeat the sound from the start... which is not very good for a piano or sax sound!
    I would like to know if NA is able to do that?
    I also have some other questions:
    Is NA able to change pitch while a sound is playing?
    Is NA able to play a lot of sounds simultaneously (but for short times, like a note)? Do you have an idea of the limit?
    Thanks a lot
    Thierry
     
  43. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Native Audio is also super simple and cannot do that. All it does is put some audio bytes into a memory area. Then, when you want to play, it selects one of the available native sources (players) to run over that memory area. The wav header and such have already disappeared by then. NA does have a loop function, but setting loop start and end is currently not available.

    I do have some idea how it could work manually. Specifically, the "run over" I mentioned is a chain of alternating callbacks that each put a little bit of audio on the player (double buffering). If I added a variable that says "jump to this buffer once you arrive at that buffer", it would get you this behaviour; see the sketch further below.

    Also, OpenSL ES does have that function. In the spec (https://www.khronos.org/registry/OpenSL-ES/specs/OpenSL_ES_Specification_1.0.1.pdf, page 399 of the PDF, 385 printed) you will see a SetLoop method whose description is exactly what you want, essentially enabling "native Introloop". (Introloop is another of my plugins which does loopstart and loopend, but not natively; it is assembled from the Unity API, ensuring cross-platform looping with an intro.)

    That said, currently NA cannot do it. :(
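
    Still, to make the manual idea above concrete, here is a rough sketch of the cursor logic in plain C# (not NA's actual internals, which are in C): every buffer callback asks for the next chunk, and the read cursor wraps from loopEnd back to loopStart.

    Code (CSharp):
        // Fills outBuffer with the next chunk of samples, honoring loop points.
        // Returns the new cursor, to be stored and passed back in on the next callback.
        public static int NextChunk(float[] samples, int cursor, int chunkSize,
                                    int loopStart, int loopEnd, float[] outBuffer)
        {
            for (int i = 0; i < chunkSize; i++)
            {
                if (cursor >= loopEnd) cursor = loopStart; // the "jump once you arrive" variable
                outBuffer[i] = samples[cursor++];
            }
            return cursor;
        }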


    Depending on which kind of pitch change you mean, it is achieved by different means:
    1. Change pitch by changing the sampling rate of the player: this shortens/lengthens the audio and produces a chipmunk/monster effect. The sampling rate of the player on NA Android strictly must be fixed to the device's optimal rate to allow the special native FAST TRACK, so I cannot do this even if I wanted to, because it would defeat the appeal of NA.
    2. Change pitch by modifying the audio itself: this does not violate FAST TRACK, but it is currently not available in NA, nor likely to be. Also, by changing the audio, it would not be on the fly; it requires some time to transform the byte array and store it as a new pitch-shifted audio memory.
    3. Change pitch naturally, so the audio length stays the same but what you hear goes up in pitch, like in DAW software: this one I have no idea how it works, but it may require a Fourier transform and back, and it is very unlikely to be in a time-sensitive plugin like this.

    So in summary, NA still cannot do pitch shifting, and it is unlikely to be implemented :( It is better suited to a full-blown audio pipeline system like what Unity already provides.
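
    For completeness, option 2 could be done offline on your side before loading: resample the sample array by the pitch ratio, which precomputes the same chipmunk effect as option 1 without ever touching the player's rate. A minimal sketch (plain C#, not an NA feature):

    Code (CSharp):
        using UnityEngine;

        public static class PitchTools
        {
            // Offline linear-interpolation resample: pitchRatio 2.0f = up one octave, half the length.
            public static float[] ResampleForPitch(float[] input, float pitchRatio)
            {
                int outLength = (int)(input.Length / pitchRatio);
                var output = new float[outLength];
                for (int i = 0; i < outLength; i++)
                {
                    float src = i * pitchRatio;                   // fractional read position
                    int i0 = (int)src;
                    int i1 = Mathf.Min(i0 + 1, input.Length - 1); // clamp the neighbor sample
                    output[i] = Mathf.Lerp(input[i0], input[i1], src - i0);
                }
                return output;
            }
        }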

    Native Audio is based on a no-mix principle: just choose 1 native player to run over already-prepared audio memory. This is different from Unity, which mixes the various audio you ask it to play into one summed stream and then has only a single player running over that stream. That is very flexible, but mixing adds latency. With NA, you instead "mix" by having multiple sources. So the amount of concurrency depends on how many Android native sources you are willing to ask for. It defaults to 3. Over 5~7 you will get slow sources. Over 15, some devices stop giving you more. It is quite risky to go high. With 3, you can play, for example, a chord of C E G simultaneously. But for something like a piano, each key has a very long tail; the following notes will obviously cut off some prior sounds, because you cannot mix with an unfinished sound. You will have to manually plan your source usage (one approach is sketched below), or else your case is not suited to NA, because NA cannot mix.
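
    One common way to "plan your source usage" is voice stealing: when all sources are busy, cut the one that started longest ago. A sketch in plain C# (combine it with the reserved loop index idea from earlier in the thread):

    Code (CSharp):
        // Picks a player index for a new one-shot: a finished source if any,
        // otherwise steals the source that has been playing the longest.
        public class OldestStealAllocator
        {
            readonly float[] startTime; // when each source last started (e.g. Time.realtimeSinceStartup)

            public OldestStealAllocator(int sources) { startTime = new float[sources]; }

            public int Allocate(float now, float typicalClipLength)
            {
                int chosen = 0;
                for (int i = 0; i < startTime.Length; i++)
                {
                    if (now - startTime[i] > typicalClipLength) { chosen = i; break; } // free to use
                    if (startTime[i] < startTime[chosen]) chosen = i;                  // else track oldest
                }
                startTime[chosen] = now;
                return chosen;
            }
        }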

    Alternatively, you might try the Superpowered SDK. They have a lot of bells and whistles while still being native enough, and it goes to the same OpenSL ES as Native Audio. But I cannot support you more than this, as I have failed at using their SDK before, and the dev is not so responsive.

    Another alternative is Oboe. It is an API from Google which does the OpenSL ES/AAudio switching for you. There is an Asset Store item that ports Oboe to Unity too: https://assetstore.unity.com/packages/tools/audio/oboe-for-unity-134705. I don't know much detail about Oboe or Oboe for Unity, but I recall the AAudio spec has a mixer (on the slower mode; AAudio has another "direct" mode which is very interesting, and someday NA could be migrated to be based on Oboe).
     
  44. BachmannT

    BachmannT

    Joined:
    Nov 20, 2016
    Posts:
    386
    Whaoo! What an awesome answer! Thanks a lot for your help. I'm the author of Midi Player Tool Kit, and I'm quite disappointed with Android's audio capabilities. I will explore the leads you gave.
    Thanks a lot
    Thierry
     
  45. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Oh, you are working with MIDI? It has nothing to do with NA, but OpenSL ES does have several APIs that can handle MIDI. I think it is worth exploring; maybe make your own "native MIDI" on Android. It says that when the MIDI hits some notes, it goes to the normal OpenSL ES methods (which NA is also using). The SLPlaybackRateItf seen in the pic is the one that can do pitch shift by speeding up / slowing down.

    [Screenshot: OpenSL ES specification excerpt showing SLPlaybackRateItf]

    Also look at this quick reference card: https://www.khronos.org/files/opensl-es-1-1-quick-reference.pdf. There is even something like setting multiple loop points, but only for the MIDI player object.

    [Screenshot: OpenSL ES quick reference excerpt showing MIDI loop points]
     
  46. BachmannT

    BachmannT

    Joined:
    Nov 20, 2016
    Posts:
    386
    Thanks for this information. I will look at it in detail.
    Thierry
     
  47. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Hello,

    I'm having a weird issue on Android (not tested on iOS yet). During a specific sequence in my game, I play a loop and the player can play a virtual keyboard. The loop and the keyboard SFX are played using Native Audio, and I manage the track indexes in order not to override an already "playing" trackIndex.

    This seems to work fine on two phones:
    - One Plus 5T (Android 9)
    - Xperia Z1 Compact (Android 9)

    But on two different tablets it does not work:
    - Nexus 9 (Android 7.1)
    - Huawei MediaPad M5 Lite 10 (Android 8.0)

    The loop stops playing when the keyboard keys are pressed very fast. What looks weird is that the trackIndex used for the loop (0) is not (or does not seem to be) reused.

    So, it works well on some devices but not all: a sound is stopped, yet it seems that its trackIndex is not re-used.

    Any idea what could explain that ?

    PS: I'm using Native Audio 3.0.
     
  48. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,555
    Indeed, I have no scene that tests a loop running while other sounds are playable. I will make this scene to find out if it occurs on any of my devices.

    Honestly, I can't think of any reason. Since the looping function was added, up to the present 4.3.0, I have not touched anything related to the track selection algorithm. Could you make it happen while connected to adb logcat and see if any strange log comes up?
     
  49. Kiupe

    Kiupe

    Joined:
    Feb 1, 2013
    Posts:
    528
    Hello,

    I do not use the 4.3.0 version but the 3.0.0 version. What I find strange is that it works on some devices, and I can't figure out why. If you have an idea, please let me know.

    Thanks
     
  50. eSmus1c

    eSmus1c

    Joined:
    Apr 17, 2018
    Posts:
    10
    Good day, @5argon. Would you be so kind as to look at the logs I posted above? Thanks.