Official DOTS Audio Discussion

Discussion in 'Entity Component System' started by harini, Mar 28, 2019.

  1. ZexScal4

    ZexScal4

    Joined:
    May 25, 2018
    Posts:
    8

    Thank you for that info. Happy Holidays!
     
    Zeroneth01 likes this.
  2. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    The minimum supported version is 2019.2.8f1
     
  3. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,008
    @Tak
    Does DOTS audio allow writing to the audio buffer without allocating any memory? OnAudioFilterRead has been allocating since roughly Unity 5.5 (it was fine before);
    I have been told it's because the Mono domain is attached and detached on every read, which allocates GC memory.
    If so, could we have a simple example of how to write to the output buffer without using OnAudioFilterRead?
     
  4. jasonatkaruna

    jasonatkaruna

    Joined:
    Feb 26, 2019
    Posts:
    64
    @Tak
    I'm writing a basic MIDI synth. I'm experiencing a noticeable amount of latency when I play/pause using a CommandBlock.UpdateAudioKernel call, which can be difficult to adjust for when using a MIDI controller. What sorts of things should I be looking for to reduce that, if possible?
     
  5. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    Yes. There are some simple examples embedded in the com.unity.audio.dspgraph package, including writing procedurally-generated samples to the default output stream and playing a clip.
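    For a flavor of what those look like, here is a minimal sketch of a procedural sine kernel against the 0.1.0-preview API (the SineKernel type and its Frequency parameter are made up for illustration, and details such as the interleaved SampleBuffer layout may differ between preview versions):
    Code (CSharp):
    using Unity.Audio;
    using Unity.Burst;
    using Unity.Mathematics;

    [BurstCompile(CompileSynchronously = true)]
    struct SineKernel : IAudioKernel<SineKernel.Parameters, SineKernel.Providers>
    {
        public enum Parameters { Frequency }
        public enum Providers { }

        float phase;

        public void Initialize() { }

        public void Execute(ref ExecuteContext<Parameters, Providers> context)
        {
            var output = context.Outputs.GetSampleBuffer(0);
            var buffer = output.Buffer; // interleaved samples in this preview
            int channels = output.Channels;

            for (int s = 0; s < output.Samples; s++)
            {
                // Reading the parameter per sample keeps interpolated changes click-free.
                float frequency = context.Parameters.GetFloat(Parameters.Frequency, s);
                float value = 0.2f * math.sin(phase);
                phase += 2f * math.PI * frequency / context.SampleRate;
                if (phase > 2f * math.PI)
                    phase -= 2f * math.PI;

                for (int c = 0; c < channels; c++)
                    buffer[s * channels + c] = value;
            }
        }

        public void Dispose() { }
    }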
     
    ldewet-ct likes this.
  6. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    You could try controlling play/pause via a parameter on the kernel instead, and/or reducing the DSP buffer size for the graph.
    Today, there will always be up to DSPBufferSize samples of latency, because command blocks are applied between mixes, and UpdateAudioKernel actually executes an additional job (the update job you supply).
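    As a rough sketch of the first suggestion (the SynthKernel type and its Paused parameter are hypothetical, not package API):
    Code (CSharp):
    // Inside the kernel's Execute(): write silence while paused, so no
    // kernel swap or extra update job is needed.
    bool paused = context.Parameters.GetFloat(Parameters.Paused, 0) > 0.5f;
    if (paused)
    {
        var output = context.Outputs.GetSampleBuffer(0);
        var buffer = output.Buffer;
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] = 0f;
        return;
    }

    // From game code: a one-command block, applied before the next mix.
    var block = graph.CreateCommandBlock();
    block.SetFloat<SynthKernel.Parameters, SynthKernel.Providers, SynthKernel>(
        node, SynthKernel.Parameters.Paused, 1f);
    block.Complete();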
     
    jasonatkaruna likes this.
  7. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,008
    @Tak
    Thanks! The ScheduleParameter example seems quite straightforward.
    I don't understand the purpose of the DSPCommandBlock, however.
    Since you can recreate it every time from the graph, why expose it? Wouldn't it be better to just hide it in the API?
    Or are there scenarios where it's useful to store it?
    Are you supposed to NOT store it, because it could change at runtime?
     
  8. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    DSPCommandBlock is for gathering groups of changes together so that they can be applied atomically.
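    In other words, everything queued on one block becomes audible in the same mix. A sketch of typical usage (MyKernel and its Gain parameter are placeholders; signatures as of 0.1.0-preview.11):
    Code (CSharp):
    var block = graph.CreateCommandBlock();

    // Nothing below is applied yet...
    var node = block.CreateDSPNode<MyKernel.Parameters, MyKernel.Providers, MyKernel>();
    block.AddOutletPort(node, 2, SoundFormat.Stereo);
    block.Connect(node, 0, graph.RootDSP, 0);
    block.SetFloat<MyKernel.Parameters, MyKernel.Providers, MyKernel>(
        node, MyKernel.Parameters.Gain, 0.5f);

    // ...until the block completes; then everything is applied atomically between mixes.
    block.Complete();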
     
    00christian00 likes this.
  9. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,008
    Thanks! It's clear now.
    Is there a way to expose the output buffer outside the AudioKernel?
    I have a quite complex system that relies on several MonoBehaviours right now, and it would be quite lengthy to convert everything to the new system.
    Right now I am using MonoPInvokeCallback and IntPtr to share the buffer between native and managed code.
    Is there an equivalent for DOTS?
     
  10. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    Sorry for the long delay.
    If I understand what you're asking, then not really.
    However, you can implement only part of your system as a graph and drive it manually (calling BeginMix/ReadMix yourself), if that makes sense for your use case.
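    For reference, a sketch of what driving a graph manually can look like, modeled loosely on the default driver in the dspgraph samples (signatures as of the 0.1.0 previews; the per-callback allocation is for brevity only, a real driver would reuse a buffer):
    Code (CSharp):
    using Unity.Audio;
    using Unity.Collections;
    using UnityEngine;

    public class ManualGraphDriver : MonoBehaviour
    {
        DSPGraph graph; // created elsewhere, e.g. via DSPGraph.Create(...)

        void OnAudioFilterRead(float[] data, int channels)
        {
            int frameCount = data.Length / channels;

            // Kick off this mix, then block until the samples are rendered.
            graph.BeginMix(frameCount);

            using (var buffer = new NativeArray<float>(data.Length, Allocator.Temp))
            {
                graph.ReadMix(buffer, frameCount, channels);
                buffer.CopyTo(data);
            }
        }
    }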
     
    00christian00 likes this.
  11. bashis

    bashis

    Joined:
    Mar 18, 2013
    Posts:
    7
    Is there any info on when documentation is going to be available for DSP Graph? I mean, this was announced over a year ago and there are still no signs of it on the Unity roadmap. Can we at least expect anything stable in terms of API this year?
     
  12. aksyr

    aksyr

    Joined:
    Sep 19, 2013
    Posts:
    1
    I have made a modular synth using a modified DSP Graph 0.1.0-preview.11. Basically, this modification allows unconnected inputs to be recognized as connected (their buffers are filled with zeros), because I found DSP Graph's handling of connections inside kernels very annoying. With some minor modifications this could be used with the standard DSP Graph.

    This project also contains a very hackish/naive implementation of microphone input in the graph (someone asked about that earlier).

    Here it is: https://github.com/aksyr/Unity-DSP-Graph-Modular-Synth
     
    Last edited: Feb 5, 2020
  13. thelebaron

    thelebaron

    Joined:
    Jun 2, 2013
    Posts:
    822
    Will the less low-level part of DOTS audio be part of this package or another package entirely?
     
  14. hamokshaelzaki

    hamokshaelzaki

    Joined:
    Nov 6, 2012
    Posts:
    19
    Guys, I'm lost. How do I play an audio clip when two objects collide in a simple pool game?
     
  15. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    The current plan is that there will be a separate DOTS Audio package
     
    deus0 and thelebaron like this.
  16. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    You might be in the wrong place, this thread is about low-level, experimental, upcoming audio support for DOTS :)
     
    deus0 likes this.
  17. SebLazyWizard

    SebLazyWizard

    Joined:
    Jun 15, 2018
    Posts:
    219
    Is there any news on the ETA of that package?
     
    Games4Stream likes this.
  18. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    1,001
    Sorry, we don't have any news to announce right now.
     
  19. Mockarutan

    Mockarutan

    Joined:
    May 22, 2011
    Posts:
    158
    What is the general purpose of having multiple SampleProviders in one node? I assume mixing is done by feeding multiple nodes into another node, right? So why feed more than one sound source into a node via SampleProviders? What am I missing?
     
  20. W4ru

    W4ru

    Joined:
    Oct 24, 2019
    Posts:
    2
    Is it possible to load an AudioClip from the persistentDataPath and use it as a SampleProvider?
    When I try to do that, I get this error message:


    ArgumentException: AudioClip.GetAudioSampleProviderId can only be used with AudioClips that represent persistent assets.
    Unity.Audio.DSPCommandBlock.SetSampleProvider[TParameters,TProviders,TAudioKernel] (UnityEngine.AudioClip clip, Unity.Audio.DSPNode node, TProviders item, System.Int32 index, System.Int64 startSampleFrameIndex, System.Int64 endSampleFrameIndex, System.Boolean loop, System.Boolean enableSilencePadding) (at Library/PackageCache/com.unity.audio.dspgraph@0.1.0-preview.11/Runtime/DSPCommandBlock.cs:641)
     
  21. janm_unity3d

    janm_unity3d

    Unity Technologies

    Joined:
    Jun 12, 2012
    Posts:
    36
    Hi W4ru, unfortunately this is not possible. Any functionality that could cause side effects with AudioClips, in the sense that reading an audio clip might concurrently modify its state (WWW/DownloadRequestHandler, AudioClip.Create), is not available for SampleProviders. Only disk-based assets, whose data is read-only, allow such shared access. Sounds streamed from WWW don't fall into this category, because playback is directly coupled with the buffering of the compressed audio data.
     
  22. W4ru

    W4ru

    Joined:
    Oct 24, 2019
    Posts:
    2
    Hi janm, thanks for the quick reply!
    The game downloads an audio file from a server to the computer's file system. Do you think there is any way to play this file using DSPGraph?
     
  23. deus0

    deus0

    Joined:
    May 12, 2015
    Posts:
    256
    I am really looking forward to that lower level audio package. I have all my systems using ECS now, and I'd like to add a lot of audio to different entities. I could use jobs to process new audio, and perhaps a component system to push the raw audio data into the audio kernel? (or could we do this in a job system?)
    If I have thousands of characters in my scene, all making noise, I wonder if it's possible to transform these sounds by 3D location and add them together into one audio channel? or perhaps I'm thinking of it wrong.
    I've written procedural algorithms before for audio generation, but I'd like to make my characters be able to speak at the same time in the most performant way.
     
    florianhanke likes this.
  24. Nifflas

    Nifflas

    Joined:
    Jun 13, 2013
    Posts:
    118
    DOTS question. Currently, I'm using my own music software that runs in its own thread and schedules events to run at specific DSP times. For thread safety, I just use regular locks. It's important that I don't run the music logic on the main thread: since I'm triggering every note individually and with low latency, I can't afford the music logic to suffer if I, e.g., temporarily get a bad framerate.

    For the audio side, I'm using OnAudioFilterRead and pass the array to a C++ plugin, but I was hoping to move the DSP logic from C++ to burst compiled C#.

    So, as for my question: with DOTS audio, will I similarly be able to schedule events I want to happen at exact DSP times from anywhere other than the main thread?
     
    Last edited: Apr 14, 2020
    deus0 likes this.
  25. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    19
    It should be possible to port your system to C# using DOTS and the current version of DSP Graph.

    Events like node creation/destruction, parameter changes etc. are queued up in a DSPCommandBlock.
    These batched events (commands) are then executed on the audio thread during each audio rendering pass.

    This scheduling can be done from any thread.
    Using DOTS, it is perfectly fine to call the DSPCommandBlock API from a job.
    Some DSPGraph API calls can not be compiled by Burst, but that is more of an optimisation issue.


    Sample-accurate scheduling can be done using the AddAttenuationKey and AddFloatKey calls.

    https://docs.unity3d.com/Packages/c.../api/Unity.Audio.DSPCommandBlock.html#methods
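    For example, a sketch of scheduling a gain change at an exact DSP clock value (SynthKernel, its Gain parameter, and the beat math are assumptions for illustration):
    Code (CSharp):
    // Schedule a gain jump exactly one beat from now, measured in samples.
    long samplesPerBeat = (long)(sampleRate * 60.0 / bpm);
    long targetClock = graph.DSPClock + samplesPerBeat;

    var block = graph.CreateCommandBlock();
    block.AddFloatKey<SynthKernel.Parameters, SynthKernel.Providers, SynthKernel>(
        node, SynthKernel.Parameters.Gain, targetClock, 1.0f);
    block.Complete();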
     
  26. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    19
    Are you talking about processing raw audio data or just parameters (e.g. volume, pan, DSP-specific parameters) in ECS? If it's the former, I would do that in DSP Graph since that's what it's made for. If it's the latter, you could aggregate the parameters of many 'virtual sound emitters' in the ECS domain (e.g. your characters) into one 'concrete sound emitter' in the audio rendering domain (e.g. a DSP Graph node). This is sort of the equivalent of bouncing tracks down to one in a DAW to reduce CPU load. In fact, this is the basis of the sound field implementation in the MegaCity demo.

    Also, consider if having thousands of things emitting sound is actually desirable re: the resulting aural aesthetics. You might be adding a lot of redundant processing for something that in the end sounds muddy/cacophonous.

    Usually you would want a limit on how many real voices are playing at any given time. Voices that exceed this number or are actually inaudible will be virtualised (i.e. will be excluded from most per-frame processing). The coming DOTS audio system that we are building on top of DSP Graph will include virtualisation, just like the current audio system.
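    A tiny sketch of that voice-limiting idea on the game-logic side (all types and helpers here are hypothetical; the coming package will presumably do this differently):
    Code (CSharp):
    // Rank emitters by a cheap audibility estimate and give only the top
    // maxRealVoices a concrete DSP Graph node; the rest stay virtual.
    emitters.Sort((a, b) => b.EstimatedLoudness.CompareTo(a.EstimatedLoudness));

    for (int i = 0; i < emitters.Count; i++)
    {
        if (i < maxRealVoices)
            AssignRealVoice(emitters[i]); // connect or reuse a node, update parameters
        else
            Virtualize(emitters[i]);      // only advance the playback position
    }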
     
    deus0 and florianhanke like this.
  27. Nifflas

    Nifflas

    Joined:
    Jun 13, 2013
    Posts:
    118
    Quick follow-up question.

    1: Can it be called from threads that do not use the job system?
    2: If I implement the music logic as a job, is there a way to run said job continuously and framerate-independently at, say, 100 times per second, so that it doesn't take a hit even if the main thread / rendering FPS does? I've had trouble googling how to do this.
     
    deus0 likes this.
  28. SharkmanSam

    SharkmanSam

    Joined:
    Aug 2, 2013
    Posts:
    4
    Hate to be that guy, but man... am I disappointed with Unity Audio. It's been a train wreck for many, many years: poor to no implementation, a lack of tutorials for advanced users, and stunted improvements. What gives? I've been waiting since the Unity 5 release for proper audio coding features. DSPGraph & DOTS seem promising, but they're currently a mess.

    Please make tutorials for basic and advanced usage! Whatever examples are out there have been too basic, with inconsistent implementations (lots of ways to do the same thing), and most projects seem to crash.

    We could use a video of a proper DSPGraph setup: adding nodes and connections, removing them, the profiler visualizer, etc.

    We need more. Thank you.
     
  29. zollenz

    zollenz

    Unity Technologies

    Joined:
    Jun 4, 2019
    Posts:
    19
    1. Yes. You don't have to use DOTS to use DSP Graph.
    2. In DOTS, jobs are scheduled from the main thread, and the main thread does not proceed to the next frame until all jobs scheduled within that frame have finished; job execution cannot span multiple frames. This means that your scheduling would be sensitive to the main thread stalling, which is undesirable in your case. It would be better to implement the timing-sensitive logic as jobs in DSP Graph: while it uses the same job system as DOTS, job scheduling is done from the audio thread.
     
    Last edited: May 18, 2020
    deus0 likes this.
  30. JamesWjRose

    JamesWjRose

    Joined:
    Apr 13, 2017
    Posts:
    661
    Any news on when a more high-level DOTS audio is coming? DSP Graph seems awesome, but overkill for just having an audio clip play on an NPC. E.g., I have a city racing game, and it would be nice to add audio to each car in the city.
     
    NotaNaN, Hyp-X and Srokaaa like this.
  31. clintaki

    clintaki

    Joined:
    May 13, 2019
    Posts:
    14
    Let me start by saying that I can see some amazing potential here. This is going to lead to some great audio in Unity eventually.

    Let me tell you what I did that isn't working out for me and maybe someone can help me start getting things sorted out. I will then go over what I want to do. I don't think my desires are going to be all that uncommon.

    What I tried
    I started with the PlayClip sample, like I think a lot of people will. It was a bit confusing because of how it was centered around a GameObject; it should be converted to full ECS. Aside from that, it would probably work fine for most people, but I have the dynamically-loaded-samples problem mentioned above. This came up because I am focusing heavily on modding in the current game. I am fairly new to Unity, but I found out pretty quickly that Unity doesn't give you a great way to work with dynamically loaded audio. This is an existing problem, though, so why should DOTS audio be the one to solve it?

    So, I went a different route than most probably would. Wave files are not complicated, so I just loaded them up as NativeArray<float> and put some types together with metadata. I even went as far as resampling them up front. I can see the value in being able to randomize the pitch a little to add some uniqueness, but I don't need it right now. A lot of the code in PlayClip is centered around converting to stereo and resampling on the fly. I have not tested writing the files back out to verify that my code doesn't have issues with the process, but it's not super complex, so I think I got it right. I could definitely have bugs in this code that I should try to shake out, but I don't think that is the issue.
    Edit: I have written out the wave files, and they are resampled and converted to stereo properly. I did have to write them as IEEE float (format code 3), though. This is strange, since I have no logic to convert from PCM on the initial read.

    I have a reader type that I spin off of this that keeps up with the offset and works with a NativeSlice. I get this to my node in my version of StartPlayingClip. My node is really simple, because the reader type handles most of the complexities of filling the buffer. I could totally have bugs here as well.

    I then have types similar to how the MegaCity demo works for firing/starting the audio nodes. I used the system state and all that. I have a queue to try to recycle them so I'm not constantly connecting and disconnecting nodes.

    What happens
    1) My audio actually plays, but every couple of seconds I get a stutter. This is most obvious with music. I have had this issue since I first converted PlayClip over to using a NativeArray<float>. At the beginning of my scene, I am just playing one node (music).

    Fixed: Use Incremental GC https://blogs.unity3d.com/2018/11/26/feature-preview-incremental-garbage-collection/

    2) After playing in the editor, I randomly get crashes. I think this could be related to how I allocate my native arrays. Another issue might be related to disposing my nodes. I have tried both the audio kernel and persistent allocations. I have tried various approaches, and I have leaned toward accepting the DOTS warnings when it cleans things up vs. trying to dispose things properly. It seems like the native arrays that make their way to my nodes get disposed when the node gets disposed, but I am not sure about this. I think developers in general need really clear guidance on this.

    The latest one seems to be here in DSPNodeImplementation:
    Code (CSharp):
    internal void DeallocateJobData()
    {
        if (*m_JobDataAllocationRoot == 0)
            return;

        Utility.FreeUnsafe((void*)*m_JobDataAllocationRoot);
        *m_JobDataAllocationRoot = 0;
    }
    Another point about this: I had some trouble figuring out how to deal with disposing nodes from different systems, because OnDestroy gets called in different orders. I can code around this, but it's another reason that maybe just letting the graph teardown handle it is the way to go. Example: an AudioSystem with the graph and a MusicSystem with a node.

    3) My teammate can't load the project. He gets crashes on Unity project load. I have a really stout developer machine (watercooled, 16-core, etc.). He has a fairly outdated standard gaming rig whose exact specs I'm not sure of. I am going to have him check the crash logs now that I know about that.

    4) I found out that DOTS audio isn't going to magically solve the train wreck that is me trying to play a bunch of samples at once. This is basically the MegaCity problem, but with a twist: a lot of my sounds are not looping. I need guidance on what I should do here. Various approaches have crossed my mind.

    a) My first thought was to let DOTS handle aggregating this information and send it all to a single node per sample. This seems like it would fail because it's not really stepping through the buffers and DSP time properly.
    b) I also thought about having a primary node like the solution above, but also having it connect a node per source and then having it somehow only process the first x inputs. This seems like it would fail because it would need to advance the buffer, and there is no way to do a simpler version of the processing on the input node.
    c) Another approach was to just ignore samples after x of the same category of sample are already playing. This seems like it would fail if I didn't eventually play the sample in scenarios where the sample is looping. Example... 6 rockets fired, only 4 playing. The 4 playing blow up, but the last 2 are not making any noise.

    Summary
    I can definitely empathize with SharkmanSam right now while seeing that we are on the cusp of something really useful. Mostly, I need guidance.
     
    Last edited: May 14, 2020
    OneAndOneIsTwo likes this.
  32. l33t_P4j33t

    l33t_P4j33t

    Joined:
    Jul 29, 2019
    Posts:
    232
    What's the best way of just playing simple clips at certain positions in DOTS?

    So far I've seen three different implementations of audio in DOTS: Unity.Tiny.Audio, the audio in the multiplayer sample, and DSP Graph.

    DSP Graph doesn't use the audio listener, and there is no sample with varying position, so I'm not sure how to get it to play a sound at a certain coordinate.
     
    deus0 likes this.
  33. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    @l33t_P4j33t Where can I find this example? Thanks!
     
    Last edited: May 10, 2020
  34. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    421
    You can import the sample from the Package Manager if you select the DSPGraph package.
     
    florianhanke likes this.
  35. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Thank you very much!

    @l33t_P4j33t If I remember correctly, the Megacity project contains Attenuation-related code.
     
  36. clintaki

    clintaki

    Joined:
    May 13, 2019
    Posts:
    14
    I started extracting my code into a blank project to open source for people to play around with, and it doesn't have any of the issues I run into in my main project. I will probably have it ready in the next day or so. The sprite renderer floating around here helped me get things going pretty well on that side.

    Could the graph be stuttering while other things are going on?

    Edit: https://gitlab.com/clint-simulize/dots-audio-sample

    Edit 2: Calling GC.Collect() will stutter every time. Is there something I need to do to protect the DSPGraph from GC stalls?

    Edit 3: Using incremental GC fixes it! Victory is mine! https://blogs.unity3d.com/2018/11/26/feature-preview-incremental-garbage-collection/
     
    Last edited: May 14, 2020
  37. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Yesterday, I put together a simple spatialization audio system relative to the AudioListener, with stereo delay (left/right), attenuation (distance), and a lowpass filter (distance). Code is here: https://gist.github.com/floere/be4b9bd309d586dc43cb865493e076ce (code taken from the DSPGraph samples is marked as such).

    I was really surprised how easy it is to put together and how well even this draft works.

    One thing I was wondering about:
    I am using a node pool so I can reuse the nodes. Before playing a clip, I need to get a free PlayClip node and update the other nodes between it and the root DSP node. The way I am doing it is by remembering mappings from PlayClip node to spatializer node (for the stereo 3D effect), lowpass node (for the lowpass), and lowpass-to-root connection (for the attenuation), and then using these mappings to get the nodes/connection and update them.
    Is that the way to go?

    Here is my structural setup. The PlayClip nodes are either stored in the playingNodes or freeNodes list.
    Code (CSharp):
    // ┌──────────────────────────────┐   ┌──────────────────────────────┐
    // │         playingNodes         │   │          freeNodes           │
    // └──────────────────────────────┘   └──────────────────────────────┘
    //                 │                                  │
    //         ┌──────── ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
    //         │
    //         ▼
    // ┌──────────────┐     ┌──────────────┐     ┌──────────────┐     ┌──────────────┐
    // │              │     │              │     │              │     │              │
    // │   PlayClip   │────▶│ Spatializer  │────▶│   Lowpass    │────▶│     Root     │
    // │              │     │              │     │              │     │              │
    // └──────────────┘     └──────────────┘     └──────────────┘     └──────────────┘
    //         │                    ▲                    ▲
    //                              │
    //         └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘
    //          clipToSpatializerMap   clipToLowpassMap
     
    Last edited: May 13, 2020
    eugolana, Nothke, deus0 and 2 others like this.
  38. l33t_P4j33t

    l33t_P4j33t

    Joined:
    Jul 29, 2019
    Posts:
    232
    Incredible.
    It doesn't look easy at all.
    Works great.
    The only caveat is that there's no way to change the location of a sound after it has started playing.
     
    Last edited: May 13, 2020
  39. l33t_P4j33t

    l33t_P4j33t

    Joined:
    Jul 29, 2019
    Posts:
    232

    So how would you go about updating nodes to account for player position? I don't understand the DSP Graph API at all (nor the DOTS animation and data flow APIs, for that matter). I'm thinking it would work if I were to put this function

    Code (CSharp):
    public void RotateListener(float3 eulerRotationDelta) {
        using (DSPCommandBlock block = graph.CreateCommandBlock()) {
            foreach (var node in playingNodes) {
                block.CreateUpdateRequest<PlayClipKernelUpdate,
                    PlayClipKernel.Parameters,
                    PlayClipKernel.SampleProviders,
                    PlayClipKernel>(???, node,
                    callback => {
                        ???
                        // do something to each node based on rotation
                    });
            }
        }
    }
    in AudioSystem.cs and call it when the camera rotates, but I'm not sure what to put in the action delegate or what the first CreateUpdateRequest argument is supposed to be.
     
    Last edited: May 14, 2020
    florianhanke likes this.
  40. clintaki

    clintaki

    Joined:
    May 13, 2019
    Posts:
    14
    I did a node pool too, thinking that connecting/disconnecting at high frequency would be more of a performance hit than processing dead nodes. Are we right in thinking this, or should we handle it more like we would with entities, where pooling is even counterproductive?

    Edit: Just realized I am actually connecting and disconnecting as they start/stop.
     
    Last edited: May 14, 2020
    florianhanke likes this.
  41. clintaki

    clintaki

    Joined:
    May 13, 2019
    Posts:
    14
    Code (CSharp):
    for (int i = 0; i < connections.Count; i++)
    {
        block.Disconnect(connections[i]);
    }

    for (int i = 0; i < freeNodes.Count; i++)
    {
        block.ReleaseDSPNode(freeNodes[i]);
    }
    I had the same thing: disconnecting and then calling ReleaseDSPNode. Navigating through that, I see that it lands at DSPGraph.ScheduleDSPNodeDisposal, which seems to disconnect the inputs and outputs and then queue up the node for disposal on the next call to DSPGraph.Update.

    DSPNode.Dispose(DSPGraph graph) also disconnects the inputs and outputs. I had a lot of problems with DeallocateJobData bombing when I called it previously. My initial read is that we should not be calling DSPNode.Dispose.

    Do we need to call DSPGraph.Update to get it to dispose all these queued nodes before we dispose the graph? I can't figure out whether the graph disposal does that, because of the trampoline.

    Edit: Found it! It does call it.

    Code (CSharp):
    [MonoPInvokeCallback(typeof(Trampoline))]
    internal static void DoDispose(ref DSPGraph graph)
    {
        try
        {
            graph.Dispose(DisposeBehavior.RunDisposeJobs);
        }
        catch (Exception exception)
        {
            // Don't throw exceptions back to burst
            Debug.LogException(exception);
        }
    }

    internal void Dispose(DisposeBehavior disposeBehavior)
    {
        if (disposeBehavior == DisposeBehavior.RunDisposeJobs)
        {
            // Execute pending command blocks
            ApplyScheduledCommands();

            // Execute callbacks from pending command blocks
            Update();

            // Callbacks may schedule one more round of commands
            ApplyScheduledCommands();
        }

        CleanupRemainingNodes(disposeBehavior);
        InternalDispose(disposeBehavior);
    }
     
    Last edited: May 14, 2020
    florianhanke likes this.
  42. clintaki

    clintaki

    Joined:
    May 13, 2019
    Posts:
    14
    Could you use a separate update kernel for that purpose?
     
  43. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Thanks! Ah, I meant "easy" as compared to writing low-level audio in other languages; I wasn't clear!

    There are many other caveats: no Doppler effect, no realistic filtering based on distance and location, no 7.1, … :) It's just what I put together in half a day. I was hoping somebody could use it and build on top of it, or let me know how I could write it in a better way.

    Regarding the Doppler effect, I just found what I meant above regarding attenuation in the MegaCity example:
    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class DelayLineDopplerHack : MonoBehaviour
    {
        public AudioListener m_Listener;

        public GameObject m_Proto;

        public int m_NumSources = 30;

        public float m_MaxDistance = 1.0f;
        public float m_Haas = 2000.0f;
        public float m_EnvelopeSpeed = 2.0f;
        public float m_PanAmount = 2.0f;
        public float m_Radius = 500.0f;
        public float m_SwitchTimeMin = 0.01f;
        public float m_SwitchTimeMax = 4.0f;
        public float m_DopplerLevel = 1.0f;

        struct SourceData
        {
            public GameObject m_Object;
            public float[] m_Attenuation;
            public float m_Distance;
            public float m_InterpolatedDistance;
            public float[] m_InterpolatedAttenuation;
            public Vector3 m_Follow;
            public Vector3 m_Target;
            public float m_TimeUntilNextUpdate;
            public float m_MoveTime;
        }

        SourceData[] m_SourceData;
        float[] m_Delay = new float[0x40000];
        int m_WritePos;
        int m_SampleRate;
        bool m_Ready;

        // Start is called before the first frame update
        void Start()
        {
            m_SampleRate = 44100;
            m_SourceData = new SourceData[m_NumSources];
        }

        System.Random r = new System.Random();

        // Update is called once per frame
        void Update()
        {
            if (!m_Ready)
            {
                for (int i = 0; i < m_SourceData.Length; i++)
                {
                    var s = m_SourceData[i];
                    s.m_Object = Object.Instantiate(m_Proto);
                    s.m_Attenuation = new float[2];
                    s.m_InterpolatedAttenuation = new float[2];
                    m_SourceData[i] = s;
                }

                m_Proto.SetActive(false);
            }

            for (int i = 0; i < m_SourceData.Length; i++)
            {
                var s = m_SourceData[i];
                s.m_TimeUntilNextUpdate -= Time.deltaTime;

                if (s.m_TimeUntilNextUpdate <= 0.0f)
                {
                    s.m_MoveTime = m_SwitchTimeMin + (m_SwitchTimeMax - m_SwitchTimeMin) * (float)r.NextDouble();
                    s.m_TimeUntilNextUpdate += s.m_MoveTime;

                    var x = ((float)r.NextDouble() * 2.0f - 1.0f) * m_Radius;
                    var y = ((float)r.NextDouble() * 2.0f - 1.0f) * m_Radius;
                    var z = ((float)r.NextDouble() * 2.0f - 1.0f) * m_Radius;
                    s.m_Target = new Vector3(x, y, z);
                    if (!m_Ready)
                    {
                        s.m_Follow = s.m_Target;
                        s.m_Object.transform.position = s.m_Target;
                        x = ((float)r.NextDouble() * 2.0f - 1.0f) * m_Radius;
                        y = ((float)r.NextDouble() * 2.0f - 1.0f) * m_Radius;
                        z = ((float)r.NextDouble() * 2.0f - 1.0f) * m_Radius;
                        s.m_Target = new Vector3(x, y, z);
                    }
                }

                m_SourceData[i] = s;
            }

            float distanceScale = 1.0f / m_MaxDistance;

            for (int i = 0; i < m_SourceData.Length; i++)
            {
                var target = m_SourceData[r.Next(m_SourceData.Length - 1)].m_Object.transform.position;
                var s = m_SourceData[i];
                float moveSpeed = 1.0f - Mathf.Pow(0.1f, Time.deltaTime / s.m_MoveTime);
                s.m_Follow += (s.m_Target - s.m_Follow) * moveSpeed;
                var delta = (s.m_Follow - s.m_Object.transform.position) * moveSpeed;
                s.m_Object.transform.position += delta;
                s.m_Object.transform.rotation = Quaternion.LookRotation(-delta.normalized);
                var dir = m_Listener.transform.worldToLocalMatrix.MultiplyVector(s.m_Object.transform.position);
                var len = dir.magnitude;
                var leftDir = 0.5f * dir.z - dir.x;
                var rightDir = 0.5f * dir.z + dir.x;
                var attenuation = 1.0f / (1.0f + s.m_Distance * distanceScale);
                s.m_Attenuation[0] = attenuation * (0.65f + m_PanAmount * 0.45f * Mathf.Clamp(leftDir * Mathf.Abs(leftDir) / (len + 0.001f), -1.0f, 1.0f));
                s.m_Attenuation[1] = attenuation * (0.65f + m_PanAmount * 0.45f * Mathf.Clamp(rightDir * Mathf.Abs(rightDir) / (len + 0.001f), -1.0f, 1.0f));
                s.m_Distance = (m_Listener.transform.position - s.m_Object.transform.position).magnitude;
                m_SourceData[i] = s;
            }

            if (!m_Ready)
            {
                for (int i = 0; i < m_SourceData.Length; i++)
                {
                    var s = m_SourceData[i];
                    s.m_InterpolatedDistance = s.m_Distance;
                    m_SourceData[i] = s;
                }
                m_Ready = true;
            }
        }

        void OnAudioFilterRead(float[] data, int numChannels)
        {
            if (!m_Ready)
            {
                for (int n = 0; n < data.Length; n++)
                    data[n] = 0.0f;
                return;
            }

            float envelopeSpeed = 1.0f - Mathf.Pow(0.001f, 1.0f / (m_EnvelopeSpeed * m_SampleRate));

            int maxLength = (int)(m_Delay.Length / numChannels) - 1;

            float dopplerSamples = m_DopplerLevel * m_SampleRate / 340.0f;

            for (int n = 0; n < data.Length; n += numChannels)
            {
                for (int c = 0; c < numChannels; c++)
                {
                    m_Delay[m_WritePos] = data[n + c];
                    if (++m_WritePos == m_Delay.Length)
                        m_WritePos = 0;
                    data[n + c] = 0.0f;
                }

                for (int i = 0; i < m_SourceData.Length; i++)
                {
                    var s = m_SourceData[i];

                    s.m_InterpolatedDistance += (s.m_Distance - s.m_InterpolatedDistance) * envelopeSpeed;

                    int delaySamplesBase = (int)(s.m_InterpolatedDistance * dopplerSamples);

                    for (int c = 0; c < numChannels; c++)
                    {
                        var haasDelay = (int)((s.m_Attenuation[0] - s.m_Attenuation[1]) * m_Haas * (c * 2 - 1));
                        var delaySamples = Mathf.Clamp(delaySamplesBase + haasDelay, 0, maxLength);

                        int readPos = m_WritePos - numChannels - delaySamples * numChannels;
                        if (readPos < 0)
                            readPos += m_Delay.Length;

                        s.m_InterpolatedAttenuation[c] += (s.m_Attenuation[c] - s.m_InterpolatedAttenuation[c]) * envelopeSpeed;
                        data[n + c] += m_Delay[readPos] * s.m_InterpolatedAttenuation[c];
                        if (++readPos >= m_Delay.Length)
                            readPos = 0;
                    }

                    m_SourceData[i] = s;
                }
            }
        }
    }

    This all sounds like it handles the Doppler effect (by remembering the last position and calculating the relative speed) and the attenuation of passing sounds, albeit as a MonoBehaviour.
     
    Last edited: May 14, 2020
  44. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Whether to pool or not, and whether to remember references to all the nodes/connections that we want to update, is exactly what I am also wondering about. In my example, I pool, and only disconnect/dispose when the system is destroyed.

    P.S.: Is there a reason that people only (appear to) comment on this thread regarding DSPGraph, or is it OK to open new threads?
     
  45. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    A simple idea: remember a mapping from a specific one-shot sound to a node in the AudioSystem. Then move the node-updating code from playOneShot into an updateOneShot, pass it the node and the relativeTranslation, and there get the node and update the spatializerNode, lowpassFilterNode, and connection. See the sketch after this post's text.

    When the sound is finished, the AudioSystem will have to remove the one-shot-to-node mapping.

    This will not handle the Doppler effect, but it will reposition the sound and recalculate all related parameters. (I'd also extract the recalculation into a separate class, to keep it clean.)
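    Roughly, as a sketch (the kernel types, maps, and the *FromDistance/*FromDirection helpers are all hypothetical placeholders):
    Code (CSharp):
    public void UpdateOneShot(DSPNode clipNode, float3 relativeTranslation)
    {
        // Look up the downstream nodes/connection remembered for this clip node.
        var spatializerNode = clipToSpatializerMap[clipNode];
        var lowpassNode = clipToLowpassMap[clipNode];
        var attenuationConnection = lowpassToRootConnectionMap[clipNode];

        float distance = math.length(relativeTranslation);

        var block = graph.CreateCommandBlock();

        // Recompute the stereo delay, cutoff, and attenuation from the new position.
        block.SetFloat<SpatializerKernel.Parameters, SpatializerKernel.Providers, SpatializerKernel>(
            spatializerNode, SpatializerKernel.Parameters.SampleOffset,
            StereoDelayFromDirection(relativeTranslation));
        block.SetFloat<LowpassKernel.Parameters, LowpassKernel.Providers, LowpassKernel>(
            lowpassNode, LowpassKernel.Parameters.Cutoff,
            CutoffFromDistance(distance));
        block.SetAttenuation(attenuationConnection, AttenuationFromDistance(distance));

        block.Complete();
    }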
     
  46. imaginadio

    imaginadio

    Joined:
    Apr 5, 2020
    Posts:
    50
    Hello guys!
    I'm making a small 2D game on pure ECS, and I want to play sounds from my entities.
    Is there any easy-to-understand example (with as little code as possible) of playing one-shot sounds in WAV format from my entities?
    I found some examples here, but they are too complicated for me.
     
  47. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    You can use AudioSource.PlayOneShot normally :)
     
  48. imaginadio

    imaginadio

    Joined:
    Apr 5, 2020
    Posts:
    50
    Sorry for my noob question... I'm new to ECS, but how do I pass an AudioClip into a Burst-compiled job?
    I cannot make a component with an AudioClip member.
     
  49. bupkinfetch662

    bupkinfetch662

    Joined:
    May 17, 2020
    Posts:
    14
    You can make a blob asset reference to an audio clip.
    Project Tiny might benefit you if you're using URP; it has a much simpler audio system, more on the level of AudioSource.Play().
     
    imaginadio likes this.
  50. bupkinfetch662

    bupkinfetch662

    Joined:
    May 17, 2020
    Posts:
    14
    Is there a reason why DSP Graph wasn't built on top of the DataFlowGraph framework?
    Couldn't DSP nodes inherit from NodeDefinition rather than implementing IAudioKernel?
    Instead of DSPCommandBlock.Connect, you'd use NodeSet.Connect;
    instead of DSPCommandBlock.SetData, you'd use NodeSet.SendMessage / NodeSet.SetData.

    DataFlowGraph seems precisely like the tool built for this job.
     
    Last edited: May 18, 2020