Official DOTS Audio Discussion

Discussion in 'Entity Component System' started by harini, Mar 28, 2019.

  1. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    DSPGraph is still in preview/WIP state with all the associated disclaimers. DOTS Audio is WIP but no concrete news/dates yet.

    Your points re: stability/samples/documentation are valid and acknowledged. These are also definite points of focus for the audio team.

    No, sorry. See above.
     
  2. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    The DSP Graph samples don’t include ECS code because DSP Graph is a standalone, DOTS-agnostic audio rendering engine.

    I tried loading your project (2019 MacBook Pro). It also crashed for me, and the stack trace is not giving me much to go on.

    Without the time to do a more thorough investigation, I would say that it would be much simpler for you to use the AudioClip directly in ECS (see below) and pass that to the DSP Graph API as shown in the PlayClip sample.

    You can actually have a managed component via AddComponentObject / GetComponentObject.
    But it's true that jobs using these managed components cannot be compiled by Burst.
    Also, you can add managed types to SharedComponentData.

    AFAIK a blob asset reference cannot contain managed types.
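
    For illustration, a minimal sketch of the managed-component route, assuming the Entities 0.x API (ClipReference is a name invented for this example):

    Code (CSharp):
    using Unity.Entities;
    using UnityEngine;

    public class ClipReference : IComponentData
    {
        public AudioClip Clip; // managed, so jobs touching it cannot use Burst
    }

    public static class ClipReferenceUtil
    {
        public static void Attach(EntityManager em, Entity entity, AudioClip clip)
        {
            em.AddComponentObject(entity, new ClipReference { Clip = clip });
        }

        public static AudioClip Fetch(EntityManager em, Entity entity)
        {
            return em.GetComponentObject<ClipReference>(entity).Clip;
        }
    }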
     
    Last edited: May 19, 2020
    OneAndOneIsTwo and imaginadio like this.
  3. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    • DSPGraph predates Data Flow Graph
    • DFG is currently restricted to main-thread scheduling, which would not work for DSPGraph
    • Even if the graph rendering in Data Flow Graph is async, it still syncs once every frame
    • The Data Flow Graph API is not fully Burst-compatible yet
    But you are right, they are kindred. Who knows what the future will bring ;)
     
    Last edited: May 19, 2020
    Stroustrup likes this.
  4. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
  5. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    React to game events in your game code (ECS or even MonoBehaviours) and then map those events to meaningful calls to the DSPCommandBlock API (e.g. object position is X, call SetAttenuation with value Y). Your commands will be executed when DSPGraph is ready to process them. That’s one-way communication, so you need to have your intent in place at the time of adding your command. You can use AddAttenuationKey and AddFloatKey to schedule changes for a certain time, at a sample-accurate level.

    CreateUpdateRequest (and AddNodeEventHandler) is for the reverse use case: reacting to something that happened in DSPGraph. This could be “I finished playing” (as in the PlayClip example) or “here are the latest output buffer values” for metering purposes.

    While you could use UpdateAudioKernel, it has more overhead than SetFloat/SetAttenuation since it actually creates a job. So that’s more suited for when you need to pass a custom payload.
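
    To make the mapping concrete, here is a sketch (the method names are from this post, but the exact signatures are assumptions against the preview API, so treat it as pseudocode):

    Code (CSharp):
    using Unity.Audio;

    public struct AttenuationBridge
    {
        public DSPGraph Graph;
        public DSPConnection Connection;

        public void OnDistanceChanged(float attenuation, ulong dspClockTime)
        {
            var block = Graph.CreateCommandBlock();
            // Applied when DSPGraph next processes its command queue.
            block.SetAttenuation(Connection, attenuation);
            // Sample-accurate: schedules a value for a specific DSP clock time.
            block.AddAttenuationKey(Connection, dspClockTime, attenuation * 0.5f);
            block.Complete(); // one-way: intent must be complete at submit time
        }
    }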
     
    Last edited: May 19, 2020
    Stroustrup likes this.
  6. JamesWjRose

    JamesWjRose

    Joined:
    Apr 13, 2017
    Posts:
    687
    Ok, thanks for the update. Have a good day.
     
  7. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    Awesome :)

    Yes, at a quick glance this seems OK from the DSPGraph side of things. However, if you want it to fit in a broader ECS context you’d probably want to move your data (i.e. the nodes and connections that are currently in lists/dictionaries) to IComponentData and split your work into multiple systems/jobs. But those are more general DOTS implementation details.
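
    Purely for illustration, one hypothetical shape for that data (all names are invented here):

    Code (CSharp):
    using Unity.Entities;

    // One entity per DSP node; its connections become a dynamic buffer.
    public struct DspNodeTag : IComponentData { }

    [InternalBufferCapacity(4)]
    public struct DspConnectionElement : IBufferElementData
    {
        public Entity Target; // downstream node entity
    }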
     
    florianhanke likes this.
  8. Game-Savvy

    Game-Savvy

    Joined:
    May 3, 2017
    Posts:
    4
    Hi there, I was wondering: since there is a really cool new DOTS-based audio engine (DSPGraph), does it take interaural time difference into account? And if it doesn’t, would it be possible to have something like this:
    2 audio listeners, 1 for the left ear and 1 for the right ear, and via DOTS calculate the interaural time difference, to play the "same sound" (after interaural level difference calculations).
    This would be similar to the Physics Scenes: maybe we could have a scene for audio that runs at a much higher FPS, or at least is not frame-rate dependent, which could allow us to calculate some sort of stepped interaural time difference and apply that delay to the respective ear (a back-of-the-envelope sketch follows below).
    If anyone has an idea whether this is even possible, I would love to hear a bit about it!
    Thanks.
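
    A back-of-the-envelope sketch of the stepped-delay idea (plain math, not an existing engine feature): convert the difference in ear distances into a whole number of samples of delay for the farther ear.

    Code (CSharp):
    using Unity.Mathematics;

    public static class InterauralDelay
    {
        const float SpeedOfSound = 343f; // m/s in air at ~20 °C

        // Positive result: delay the right ear by that many samples;
        // negative: delay the left ear instead.
        public static int DelaySamples(float3 source, float3 leftEar, float3 rightEar, int sampleRate)
        {
            float dLeft = math.distance(source, leftEar);
            float dRight = math.distance(source, rightEar);
            return (int)math.round((dRight - dLeft) / SpeedOfSound * sampleRate);
        }
    }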
     
  9. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Great point, thanks! :)

    What I am missing most is pre-made specialized nodes, for example a Doppler effect or 7.1 surround. Not that I mind making them myself, but I prefer something made by audio experts.
     
    deus0 and Nothke like this.
  10. deus0

    deus0

    Joined:
    May 12, 2015
    Posts:
    256
    Hi, I've been through all the code now. Thanks for all the examples ^^

    Just wondering if it's possible to push data to a kernel; specifically, I would prefer to generate all my audio floats inside job systems, because I want the audio to integrate more with my other components and thus more with my game.

    So I would need to be able to push these float arrays directly into the audio kernel, rather than generating them inside the kernel as the examples do. For example, the MIDI player that was linked on the above GitHub has to put everything inside one kernel, which I find super messy, while we can only push floats into the kernel one at a time. I'm sure I missed it somewhere in all the code, where we can push directly into it.

    I would like to set the below buffer directly in a job system.

    Code (CSharp):
    var sampleBuffer = context.Outputs.GetSampleBuffer(outIdx);
    var buffer = sampleBuffer.Buffer;

    My idea was that I can have one audio channel buffer; then for any audio component in my world I can apply 3D positioning based on audio listener components (position and an exponential dropoff curve) and any other audio post-processing, then mix them all together, and finally push the channel buffer into the IAudioKernel for output. Of course I would limit the number of sounds based on the closest ones to the listener (see the mixing sketch below).
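
    Roughly what I mean by that mix step (plain math; falloff and the buffer names are just illustrative):

    Code (CSharp):
    using Unity.Collections;
    using Unity.Mathematics;

    public static class MixUtil
    {
        // Accumulates one positioned source into the shared channel buffer
        // with an exponential distance dropoff.
        public static void MixInto(NativeArray<float> channelBuffer,
                                   NativeArray<float> source,
                                   float distance, float falloff)
        {
            float gain = math.exp(-falloff * distance);
            for (int i = 0; i < channelBuffer.Length; i++)
                channelBuffer[i] += source[i] * gain;
        }
    }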

    Edit: Another option is pushing structs into the kernel instead of float settings. The MIDI solution involved keeping all of the data in the node at once, which isn't going to work for anything more complex. Something like job-system injection, where I can crunch through a stack of data and convert it into an audio output buffer.
     
    Last edited: Jun 18, 2020
  11. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    761
    The package contains an audio sample that plays a clip via a MonoBehaviour. Are there any examples of playing clips attached to ECS entities in a 3D world? I'm especially looking for one with some weapons firing and impacts, as well as an ambient sound or two.
     
    BigRookGames and deus0 like this.
  12. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    761
    ***crickets chirping***
     
  13. sietse85

    sietse85

    Joined:
    Feb 22, 2019
    Posts:
    99
    I also want to know when it will be possible to have the audio components we use today converted to ECS components. What is an estimate for 'high-level' DOTS audio?
     
    lclemens and Bastienre4 like this.
  14. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Maybe have a look at what I posted earlier? https://forum.unity.com/threads/dots-audio-discussion.651982/page-4#post-5840713 My goal was spatialized sound. I'm mostly posting it so you can look at a possible implementation.
     
    deus0 and sietse85 like this.
  15. sietse85

    sietse85

    Joined:
    Feb 22, 2019
    Posts:
    99
    @florianhanke nice job. It seems like a lot of code, however. I was looking for a solution that involves just using the normal conversion workflow to have our normal audio components converted to ECS components, which you can then use in component systems. But I guess I have to wait longer.

    edit: Just noticed what was said by Unity members at the top of this page... waiting time ^^
     
    Last edited: Jul 8, 2020
    florianhanke and deus0 like this.
  16. deus0

    deus0

    Joined:
    May 12, 2015
    Posts:
    256
    Well^ I don't mind converting and making the ECS components myself. I just want something that lets me push the audio data into the audio kernel; making some ECS components and systems is more trivial, I think. Of course that would save us all loads of work, but I would rather start converting audio modules into an ECS architecture sooner rather than later :)
     
  17. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Ah, I should have been clearer about which part of the code may be relevant. I use my OneShot IComponentData with a simple sound id that can be mapped to an AudioClip. I create the OneShot components from a conversion workflow, resulting in an entity with OneShot and Translation components.

    You could then play them (e.g. OneShots) from a system like so:

    Code (CSharp):
    protected override void OnUpdate()
    {
        Entities.
            ForEach(
                    (Entity entity, ref OneShot oneShot, in Translation translation) =>
                    {
                        // An enum id identifying the sound to be played.
                        var soundsId = oneShot.Value;

                        // Skip entities whose one-shot was already played.
                        if (soundsId == (int) Sounds.Id.None) return;

                        // Get AudioClip from soundsId to AudioClip mapping.
                        // Or use whatever mapping you prefer.
                        var audioClip = ...;

                        // Play the sound at the entity's position.
                        AudioSource.PlayClipAtPoint(audioClip, translation.Value);
                    }
                   ).
            WithName("OneShotSystem").
            WithoutBurst().
            Run();
    }
    This is the simplest example I can think of – you may need something more involved.

    Edit: Forgot to add that for the above system to work you need a OneShotCleanupSystem, which sets OneShot.Value to Sounds.Id.None when it isn't None already, to make it a truly "one-shot" sound.

    Code (CSharp):
    public class OneShotCleanupSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            Entities.
                ForEach(
                        (ref OneShot oneShot) =>
                        {
                            // Reset the id in place; do not force a structural change.
                            oneShot.Value = (int) Sounds.Id.None;
                        }
                       ).
                Schedule();
        }
    }
     
    Last edited: Jul 18, 2020
    deus0 and sietse85 like this.
  18. deus0

    deus0

    Joined:
    May 12, 2015
    Posts:
    256
    @florianhanke The only issue with that is that it can't run from a job system, and you can't edit the data in a job system. But that code does look better than the previous code I had, which created a new GameObject for each sound. Many thanks.
     
  19. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    @deus0 Which data can't you edit from a job system?
     
    Last edited: Jul 18, 2020
    deus0 likes this.
  20. Srokaaa

    Srokaaa

    Joined:
    Sep 18, 2018
    Posts:
    169
    Hey, I created a system to bridge old Unity audio to ECS. It works by pooling regular AudioSource objects and synchronizing their positions with ECS entities. If anyone needs a temporary solution until the ECS audio package is released, feel free to use it :)

    https://github.com/Sroka/EcsAudioBridge

    It has one major flaw though: during conversion it uses a singleton to store references to AudioClips and AudioMixerGroups. Because of that it might not work when using SubScenes. If anyone knows whether it is possible to store a reference to an AudioClip within the ECS component itself, I would be thankful for some hints.
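
    The position sync is essentially this (a minimal sketch with hypothetical names, not the actual EcsAudioBridge code):

    Code (CSharp):
    using Unity.Entities;
    using Unity.Transforms;
    using UnityEngine;

    public class PooledAudioSourceRef : IComponentData
    {
        public AudioSource Source; // borrowed from the pool
    }

    public class SyncPooledAudioSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            Entities.
                WithoutBurst(). // AudioSource is a managed type
                ForEach((PooledAudioSourceRef audio, in Translation translation) =>
                {
                    // Copy the entity position onto the pooled AudioSource each frame.
                    audio.Source.transform.position = translation.Value;
                }).
                Run();
        }
    }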
     
  21. thelebaron

    thelebaron

    Joined:
    Jun 2, 2013
    Posts:
    857
    It's been a really long time since the last package update for DSPGraph; kinda wondering what's going on with it?
     
    RobJellinghaus and deus0 like this.
  22. Stroustrup

    Stroustrup

    Joined:
    May 18, 2020
    Posts:
    142
    Getting this error when attempting to build with IL2CPP.

     

    Attached Files:

  23. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,327
    Is spatialization as good as Google Resonance?
     
    deus0 likes this.
  24. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    438
    Unfortunately this has been a known error for more than 9 months...

    The best you can do until there's an update is to unpack com.unity.media.utilities and fix the GrowableBuffer.IndexOf function.

    My solution is:

    Code (CSharp):
    public int IndexOf(T item)
    {
        // Compare raw bytes element by element instead of relying on hash codes.
        for (int index = 0; index < *m_Count; ++index)
            if (UnsafeUtility.MemCmp(&item, *m_Array + index, UnsafeUtility.SizeOf<T>()) == 0)
                return index;
        return -1;
    }
    The Unity team has a different fix, but at least until we get a new release I'm sticking with my version.

    The original version relies on hash codes, and I suspect hash collisions can cause unexpected behavior in the code that uses this class.
     
  25. Stroustrup

    Stroustrup

    Joined:
    May 18, 2020
    Posts:
    142
    Errr... how? Do you have to edit \Library\ScriptAssemblies\Unity.Media.Utilities with dnSpy or something?

    You used to be able to just Ctrl+left-click to go to the definition and edit it there, but at some point they started reloading everything when you make changes to the package.
     
  26. deus0

    deus0

    Joined:
    May 12, 2015
    Posts:
    256
    In particular audio clip data. I set up a system that stores the audio data with BlittableArrays; then I compile the waves in job systems. After it's done processing, I play it later by adding a component with PostUpdateCommands. A ComponentSystem then pulls the BlittableArray of floats and sets that into a new AudioClip (which is played by a classic audio GameObject pool, while setting the location, with AudioSource.PlayOneShot).

    I was hoping that with the new ECS audio system I could push the data directly from a job. I'm not sure if there are any performance issues with audio playing like this, but I haven't run into any yet while testing with ~500 characters.
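
    That last hand-off looks roughly like this (AudioClip.Create/SetData are the standard Unity calls; the mono format here is an assumption):

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;

    public static class GeneratedClipUtil
    {
        public static AudioClip ToClip(NativeArray<float> samples, int sampleRate)
        {
            // One channel, non-streaming; SetData copies the job output
            // into the managed clip on the main thread.
            var clip = AudioClip.Create("generated", samples.Length, 1, sampleRate, false);
            clip.SetData(samples.ToArray(), 0);
            return clip;
        }
    }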
     
  27. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    438
    You don't have to edit the DLL when you have the source.

    Find the package in Library\PackageCache and copy it to Packages, then let Unity reimport and regenerate the project files.

    Now you can edit it easily.
     
    LISCINTEC, deus0 and Stroustrup like this.
  28. MiniBeatBoy

    MiniBeatBoy

    Joined:
    Sep 23, 2013
    Posts:
    15
    Where can I find the DSP Graph package? I cannot find it in the Package Manager with Unity Registry + Show preview packages enabled (2019.4.8f1).

    I haven't been able to find it anywhere on the internet either. I don't even see it listed in the Unity packages documentation. Was this cancelled? @zollenz

     
    Last edited: Aug 23, 2020
  29. RobJellinghaus

    RobJellinghaus

    Joined:
    Jul 10, 2017
    Posts:
    17
    After being excited about this last year, the total lack of updates -- and the total lack of any commitment to there ever being any more updates -- means I'll be avoiding this for the foreseeable future. Looks like a great idea with huge possibilities, but Unity seems to be severely deprioritizing it based on observable evidence.
     
  30. bashis

    bashis

    Joined:
    Mar 18, 2013
    Posts:
    8
    Can we get a comment on whether this is still under development? I mean, this isn't even on the roadmap...
     
  31. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Version 0.1.0-preview.13 has just been released :)
     
    RobJellinghaus likes this.
  32. Tak

    Tak

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    Wow, you're monitoring closely!

    Sorry for the relative silence from our end - I've been on leave for the past few months, and I've just returned this week.

    Notable things in preview.13:
    • Sample buffers are now one buffer per channel, instead of all channels interleaved per sample in a single buffer (this is an API change and will require you to adapt your audio kernels and drivers; see the sketch after the release notes)
    • The minimum required Unity version is now 2020.1

    Full release notes:
    Code (csharp):
    ## [0.1.0-preview.13] - 2020-09-10
    ### Changes
    - Update dependencies
    ### Fixes
    - Fix uninitialized buffer access

    ## [0.1.0-preview.12] - 2020-02-20
    ### Changes
    - Migrate sample buffers to one buffer per channel
    - Update com.unity.media.utilities dependency to preview.4
    ### Improvements
    - Remove some allocations caused by boxing
    ### Fixes
    - Fix crash when using dspgraph and exiting unity via script
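
    For those adapting kernels, a sketch of what the per-channel layout means inside Execute (the GetBuffer/Samples accessors match what is discussed later in this thread, but treat the exact signatures as assumptions):

    Code (CSharp):
    using Unity.Audio;
    using Unity.Burst;

    [BurstCompile]
    public struct SilenceKernel : IAudioKernel<SilenceKernel.Parameters, SilenceKernel.Providers>
    {
        public enum Parameters { }
        public enum Providers { }

        public void Initialize() { }
        public void Dispose() { }

        public void Execute(ref ExecuteContext<Parameters, Providers> context)
        {
            var output = context.Outputs.GetSampleBuffer(0);
            for (int channel = 0; channel < output.Channels; channel++)
            {
                // One contiguous buffer per channel now.
                var samples = output.GetBuffer(channel);
                for (int s = 0; s < output.Samples; s++)
                    samples[s] = 0f; // previously: interleaved[s * channelCount + channel]
            }
        }
    }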
     
    Last edited: Sep 11, 2020
  33. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Complete coincidence! Went to check for audio updates after seeing bashis' comment, and there it was :D
    This is a nice change :)
     
    RobJellinghaus likes this.
  34. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    This is fixed in the latest version of DSPGraph announced above.
     
    ScriptsEngineer likes this.
  35. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    See post here re: managed data (e.g. AudioClip) in ECS.

    What you would do is have a component like

    Code (CSharp):
    public class ManagedFoo : IComponentData
    {
        public Foo Data;
    }
    Then you can pass ManagedFoo.Data from your ECS code to the DSPGraph API as shown in the sample. The thing to note here is that, contrary to unmanaged component data, ManagedFoo needs to be a class instead of a struct for this to work.
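
    A usage sketch (the system here is hypothetical; WithoutBurst/Run is needed because the component is managed):

    Code (CSharp):
    using Unity.Entities;

    public class PassFooSystem : SystemBase
    {
        protected override void OnUpdate()
        {
            Entities.
                WithoutBurst().
                ForEach((ManagedFoo foo) =>
                {
                    // foo.Data can now be handed to the DSPGraph API,
                    // as in the PlayClip sample.
                }).
                Run();
        }
    }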
     
    Last edited: Sep 17, 2020
    BigRookGames likes this.
  36. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    In Unity's current audio system, all spatialization plugins are developed and maintained by 3rd parties (Google, Oculus etc.). The native plugin API provides information about the listener position/orientation and the AudioSource position, and the rest is up to the particular solution to implement.

    The new system will most likely follow the same approach where we provide the shared foundation and the actual implementation is done by 3rd parties.
     
    laurentlavigne likes this.
  37. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    26
    Currently, the API for exchanging data between audio and non-audio threads that will handle synchronization for you is the kernel-update mechanism (UpdateAudioKernel) described earlier in this thread.
    But you would still need to handle buffer under/overruns inside the kernel.
    Right now we don't have any built-in mechanisms for user-defined sample providers.
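
    A generic sketch of such an underrun guard (plain logic, not a specific DSPGraph API): copy what is available and pad the rest with silence.

    Code (CSharp):
    using Unity.Collections;
    using Unity.Mathematics;

    public static class StreamGuard
    {
        // Returns how many queued samples were consumed.
        public static int Drain(NativeArray<float> queue, int available, NativeArray<float> output)
        {
            int toCopy = math.min(available, output.Length);
            for (int i = 0; i < toCopy; i++)
                output[i] = queue[i];
            for (int i = toCopy; i < output.Length; i++)
                output[i] = 0f; // underrun: output silence rather than stale data
            return toCopy;
        }
    }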
     
    Last edited: Dec 22, 2020
  38. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,327
    Too bad; an out-of-the-box spatial API would benefit everyone.
     
    cultureulterior likes this.
  39. camerondus

    camerondus

    Joined:
    Dec 15, 2018
    Posts:
    52
    Hello, where do I go to find the DSPGraph preview? I'm on Unity 2020.1.4f1 HDRP and cannot see it in the package manager.
     
  40. Tak

    Tak

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    Experimental packages aren't listed in the UI yet. The easiest way to add dspgraph is to open the package manager window, click the + button in the upper left, and choose "Add package from Git URL…". Then enter com.unity.audio.dspgraph in the box.
     
    dongyiqi, deus0 and camerondus like this.
  41. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    4,264
    I got sick of waiting for an ECS audio solution and decided to prototype one over the last couple of weekends. I wanted to provide some feedback about what I discovered.

    But first, a little bit about my solution for context: I typically have my projects structured around a single mega sync-point consisting of multiple synchronous systems, which accounts for roughly 10-25% of my frame and, under load, 85-98% of my worker-thread idle time. Because audio is one of those things that rarely needs to go back into the simulation (heck, it runs on its own thread), I wanted the bulk of the DSP-processing jobs (many voices) to be dispatched from the main thread and execute during the sync point, rather than have the audio thread fight for worker-thread resources during the heavy simulation time points. I did this by making a driver that used Synchronous scheduling and a mixing kernel that received raw pointers to NativeList buffers to mix with some streamed tracks (mostly music). I used some IDs and counters to ensure the buffers were being disposed safely, and passed these buffers through a CommandBlock kernel update.

    While this was a prototype and has some prototype issues related to time constraints (nothing major), I did get it working and am pretty happy with how it behaves on a first attempt. But anyway, there are a few issues I ran into that I hope can be resolved in a future release.

    The first issue is that, because of how my technique works, if the DSPGraph ticks between the start and end of the job DSP processing, I either have to sacrifice the first audio-frame's worth of samples for new one-shots or delay them by a simulation update. While this is an inevitable issue with my approach, I would like to mitigate it by shortening the time between DSP start and DSP end as seen from the sync between job threads and the audio thread. I'm able to mitigate it on one side by using command blocks from a job. (This is awesome, by the way!) However, what is the best way to atomically get the latest state of the kernel from within a job? I don't want to collect it from the main thread, as there might be a significant delay between the main-thread job scheduling and the actual DSP jobs running. The state is a small unmanaged struct.

    The second issue is that AudioClips are class types. For the sound effects, I converted these clips to BlobAssets. This works, but I lose compression. Any ideas for how to improve this? For music tracks, I ended up dispatching it from the main thread. This is fine for simple use cases, but for more interactive stuff I would like to be able to set the tracks from a Burst job. If I have to keep the actual AudioClip assets stored somewhere, that's fine. Maybe I need to have a sampler node for every audio clip and instead update which node is playing and feeding into the rest of the graph? Does this seem like a reasonable approach? If so, is there a way to reset a provider back to the start of the clip?
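
    For reference, the clip-to-blob bake looks roughly like this (BlobBuilder and AudioClip.GetData are the standard APIs; ClipBlob is a name invented for this sketch):

    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;
    using UnityEngine;

    public struct ClipBlob
    {
        public int SampleRate;
        public int Channels;
        public BlobArray<float> Samples;
    }

    public static class ClipBlobBaker
    {
        public static BlobAssetReference<ClipBlob> Bake(AudioClip clip)
        {
            // Requires a clip whose data is readable (decompressed on load).
            var managed = new float[clip.samples * clip.channels];
            clip.GetData(managed, 0);

            using (var builder = new BlobBuilder(Allocator.Temp))
            {
                ref var root = ref builder.ConstructRoot<ClipBlob>();
                root.SampleRate = clip.frequency;
                root.Channels = clip.channels;
                var samples = builder.Allocate(ref root.Samples, managed.Length);
                for (int i = 0; i < managed.Length; i++)
                    samples[i] = managed[i];
                return builder.CreateBlobAssetReference<ClipBlob>(Allocator.Persistent);
            }
        }
    }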

    The third issue is that there is no documentation on which APIs can be called from which thread, so perhaps some things are possible that I don't realize.
     
    apkdev, deus0, Quatum1000 and 2 others like this.
  42. Tak

    Tak

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    com.unity.audio.dspgraph 0.1.0-preview.16 is now live

    Code (csharp):
    ### Changes
    - Bump burst dependency to 1.3.7
    ### Fixes
    - Fix incorrect exceptions for graphs whose buffer size isn't a multiple of the channel count
    - Don't show test and sample code in API documentation
     
  43. andrew-lukasik

    andrew-lukasik

    Joined:
    Jan 31, 2013
    Posts:
    249
    Hi @Tak, please take a look at case 1305452, freshly reported.


    Stack Trace of Crashed Thread 27600:
    0x00007FF95C2CDC46 (mono-2.0-bdwgc) mono_get_runtime_build_info
    0x00007FF95C252902 (mono-2.0-bdwgc) mono_perfcounters_init
    0x00007FF95C25B95F (mono-2.0-bdwgc) mono_runtime_invoke
    0x00007FF93AC2691D (UnityPlayer) UnityMain
    0x00007FF93AC2327C (UnityPlayer) UnityMain
    0x00007FF93A327FFB (UnityPlayer) UnityMain
    0x00007FF93A327F34 (UnityPlayer) UnityMain
    0x00007FF93A71A374 (UnityPlayer) UnityMain
    0x00007FF93A719E32 (UnityPlayer) UnityMain
    ERROR: SymGetSymFromAddr64, GetLastError: 'Attempt to access invalid address.' (Address: 00007FF939D37982)
    0x00007FF939D37982 (UnityPlayer) (function-name not available)
    ERROR: SymGetSymFromAddr64, GetLastError: 'Attempt to access invalid address.' (Address: 00007FF939FCFFEA)
    0x00007FF939FCFFEA (UnityPlayer) (function-name not available)
    0x00007FF9593C16FF (lib_burst_generated) [...\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.16\Runtime\Interfaces\IDSPCommand.cs:37] Unity.Audio.DSPCommand.Schedule
    0x00007FF9593BAACF (lib_burst_generated) [...\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.16\Runtime\DSPGraph.cs:268] Unity.Audio.DSPGraph/OutputMixerHandle::Unity.Audio.DSPGraph.OutputMixerHandle.BeginMix
    0x00007FF9593BA18E (lib_burst_generated) [...\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.16\Runtime\Extensions\AudioOutputExtensions.cs:61] Unity.Audio.AudioOutputExtensions.AudioOutputHookStructProduce`1<Unity.Audio.DefaultDSPGraphDriver>.Execute
    0x00007FF9593C2291 (lib_burst_generated) 48b351eef39ddda672f56a6738434b01_avx2
    0x00007FF959553852 (lib_burst_generated) 48b351eef39ddda672f56a6738434b01
    0x00007FF93A73E7D2 (UnityPlayer) UnityMain
    ERROR: SymGetSymFromAddr64, GetLastError: 'Attempt to access invalid address.' (Address: 00007FF939D32201)
    0x00007FF939D32201 (UnityPlayer) (function-name not available)
    0x00007FF93AEA1A9D (UnityPlayer) UnityMain
    0x00007FF93B9C9FB4 (UnityPlayer) UnityMain
    0x00007FF93B978639 (UnityPlayer) UnityMain
    0x00007FF93B9C5294 (UnityPlayer) UnityMain
    0x00007FF93B950E06 (UnityPlayer) UnityMain
    0x00007FF93BE18588 (UnityPlayer) UnityMain
    0x00007FF9D6107034 (KERNEL32) BaseThreadInitThunk
    0x00007FF9D6DDD0D1 (ntdll) RtlUserThreadStart
     
    Last edited: Jan 12, 2021
  44. Tak

    Tak

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    Thanks, I'll check it out
     
  45. Tak

    Tak

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    It looks like you've hit a bug caused by a change to Burst in Unity 2020.2.0b14.
    The fix has landed in 2021.1.0b3, and a backport to 2020.2 is in progress.
    Workarounds:
    • Temporarily disable burst
    • Downgrade to unity 2020.2.0b13 or older
     
    andrew-lukasik likes this.
  46. thelebaron

    thelebaron

    Joined:
    Jun 2, 2013
    Posts:
    857
    Just trying out @clintaki's example project (thank you btw) of DSPGraph with ECS, and I noticed upgrading it from 19.x to 2020.2.1f1 seems to have an odd effect on the sound: the pitch is all off and it sounds distorted. I can't really tell if this is a DSPGraph issue, or something with the project, or if it's the editor, but does the current DSPGraph package have a target version of Unity in mind?
     
    MicCode likes this.
  47. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    438
    Make sure to update the package to 0.1.0-preview.16.
    The low-level API changed in 2020.2, and older packages are not compatible.
     
    thelebaron likes this.
  48. MicCode

    MicCode

    Joined:
    Nov 19, 2018
    Posts:
    59
    It seems the API and buffer layout have changed since @clintaki made the sample.
    I tried to update it to 2020.2.1f1, and it appears to work correctly after some changes.
    One problem I ran into is that when the music loops, the left and right buffer sizes are not the same.
    I'm not very familiar with audio programming; maybe someone can shed some light?

    Un-comment lines 50-51 in AudioSampleDataReader.cs to show the problem:

    Code (CSharp):
    //if (buffer.Length != buffer2.Length)
    //    throw new System.Exception($"{buffer.Length}!={buffer2.Length}");


    https://github.com/mic-code/dots-audio-sample
     
    keeponshading likes this.
  49. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    438
    I guess you should use SampleBuffer.Samples to get the number of samples, instead of querying the Length of the slice. This works for me.
    Also, you have an output.GetBuffer(1).Length / 2 in PlayAudioSampleReaderNode.cs that is most likely a mistake.
     
  50. Rich_XR

    Rich_XR

    Joined:
    Jan 28, 2020
    Posts:
    9
    Thanks for the latest release (version 0.1.0-preview.13 of DSPGraph) and the continued support in this difficult time. I also wanted to mention the work of @florianhanke, whose incredible example code incorporates a spatialisation approach. @zollenz, you mentioned that the "new system will most likely follow the same approach" regarding 3rd-party spatialisation (Resonance, for example). Does this mean the current 0.1.0-preview.13 can be spatialised by a 3rd party after DSPGraph? Any advice on the approach, even if hypothetical at this stage, is greatly appreciated.
     
    deus0 and florianhanke like this.