
DOTS Audio Discussion

Discussion in 'Data Oriented Technology Stack' started by harini, Mar 28, 2019.

  1. harini

    harini

    Unity Technologies

    Joined:
    May 28, 2015
    Posts:
    11
    Hi everyone,

    As requested by you, we are starting a dedicated space to talk about our new DOTS Audio system and specifically DSPGraph that a lot of you have been exploring through the Megacity demo. We would like to use this thread as a means of getting feedback, requirements, and having candid discussions about your audio needs. Please feel free to post any requests, concerns, or issues you have with the new DOTS-based Audio system in this thread.

    Also look out in this space for new announcements and updates as we go forward building our new DOTS-based Audio system in Unity!

    You can check out more about our design philosophy and approach with Wayne's Unite LA talk about DOTS Audio and DSPGraph.


    Please note that DSPGraph is still under development and we are reworking our APIs. Do expect things to change, and consider this a very early version for you all to play with!
     
    NikH, 5argon, psuong and 6 others like this.
  2. eizenhorn

    eizenhorn

    Joined:
    Oct 17, 2016
    Posts:
    1,279
    When can we expect basic documentation and DOTS Audio as a package? :)
     
  3. vertxxyz

    vertxxyz

    Joined:
    Oct 29, 2014
    Posts:
    35
    From what I understand this system generates a graph similar to the way the Playable graph works, and then there's a visualiser. Are there plans for making it a system like Shader/VFX Graph, where the authoring can happen in a UI and not just code? I also wonder why the UI design here is disparate compared to the other tools.
     
    Alverik likes this.
  4. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    945
    We're targeting Unity 2019.2 for the first preview package of DSPGraph, the core of what will be DOTS audio.
     
    deus0, Jes28, Nothke and 2 others like this.
  5. Soaryn

    Soaryn

    Joined:
    Apr 17, 2015
    Posts:
    173
    Curiously, with this system, would we be able to choose the sound devices to input and output to and from? Or would we still be limited to Unity's current system for that? Being able to provide mic inputs via direct DOTS integration would be neat.

    I imagine having a few mic inputs and potentially 4 output sources providing different tracks (music, voip, sound fx)
     
  6. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    I'm posting my question here because it's a better place for it:
    I see that an IAudioJob containing ExecuteContext.PostEvent doesn't compile with Burst.
    Will this be supported by Burst at some point or will the API change?

    I see the Megacity demo avoids this API even though it's part of the presentation.

    I can work around this with a one-element array
    Code (CSharp):
        [NativeDisableContainerSafetyRestriction]
        public NativeArray<bool> voiceCompleted;
    and polling that bool on the main thread, but events look cleaner.
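    A minimal sketch of that polling pattern, assuming an audio kernel elsewhere sets voiceCompleted[0] = true when playback finishes (the component name and the reaction logic are illustrative, not from the original post):

    ```csharp
    using Unity.Collections;
    using UnityEngine;

    public class VoiceWatcher : MonoBehaviour
    {
        // Shared with the audio kernel, which writes voiceCompleted[0] = true when done.
        NativeArray<bool> voiceCompleted;

        void OnEnable()  => voiceCompleted = new NativeArray<bool>(1, Allocator.Persistent);
        void OnDisable() => voiceCompleted.Dispose();

        void Update()
        {
            // Main-thread poll instead of ExecuteContext.PostEvent.
            if (voiceCompleted.IsCreated && voiceCompleted[0])
            {
                voiceCompleted[0] = false;
                // React to the voice finishing here (release the node, start the next clip, etc.)
            }
        }
    }
    ```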
     
  7. janusl

    janusl

    Unity Technologies

    Joined:
    Aug 8, 2018
    Posts:
    8
    Yes (no details yet)

    As you mentioned yourself, there's a difference between visualizers and authoring tools. GraphView (the UI framework Shader/VFX Graph is built on) is an authoring tool, not really a read-only real time visualization. This was just a prototype though, so it will align in the end :)

    Yes, it will be supported soon.

    We're working on a new system for this, with scriptable inputs / outputs. It will essentially be a thin HAL with device selection working together with DSPGraph.

    We're breaking the work into many pieces, that we will release separately - with the DSPGraph engine being the first out.
     
    NikH, eizenhorn, 5argon and 5 others like this.
  8. mattboch

    mattboch

    Joined:
    Jan 13, 2015
    Posts:
    5
    Is the plan to continue using the FMOD API & prop up the DSPGraph atop it?
     
  9. harini

    harini

    Unity Technologies

    Joined:
    May 28, 2015
    Posts:
    11
    DSPGraph has been developed to be independent of the FMOD APIs already. Currently, FMOD is only used for input/output. As Wayne mentions in the Unite LA talk, you can take the DSPGraph output and hand it to any other third-party library or even to OnAudioFilterRead, or procedurally generate audio samples and feed them as input to the graph, and build something that does not require FMOD at all.

    So to answer your question: we are working on providing a solution that will enable users to choose what they want.
     
  10. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    31
    One audio feature I’ve always wanted in Unity (which I don’t believe there is a straightforward way to do) is mixing 3D audio from two different listeners.

    For instance, imagine a first-person point-and-click adventure game with Myst-like crossfade transitions when the player clicks to move. I’d like to also do a cross-fade of the audio between the ‘old’ and ‘new’ locations: fading out audio from where the player was, while fading in audio from where the player is moving to.

    Right now, I don’t think that’s possible in Unity, without some clever hacky trickery.

    Would something like this potentially be possible with the move to ECS audio?
     
    MegamaDev and Cynicat like this.
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    25,212
    So what are the reasons to remain with FMOD typically?
     
    Alverik likes this.
  12. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    Now with 2019.1.0 RC1 I'm getting console spam:

    Code (csharp):
        Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
        (Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 539)

        To Debug, enable the define: TLA_DEBUG_STACK_LEAK in ThreadsafeLinearAllocator.cpp. This will output the callstacks of the leaked allocations
        (Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 541)

        Internal: deleting an allocation that is older than its permitted lifetime of 4 frames (age = 5)
        (Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 313)

    Is there something I can do about this?
    I don't have access to ThreadsafeLinearAllocator.cpp ...

    Edit:

    I get this just by calling
    Code (csharp):
        dspCommandBlock.Complete();
        dspCommandBlock = dspGraph.CreateCommandBlock();
    from an Update() handler.

    The graph is empty; no sounds have been created yet. (Sound playback works correctly; it's just that the editor is not really usable with all the spam.)
     
    Last edited: Apr 11, 2019
  13. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    The errors come if I disable VSync.

    With VSync set to Every V Blank it works correctly.
    With VSync set to Don't Sync it spams this error.

    It would be good to have a better workaround for this.
     
  14. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    945
    This is happening because DSPGraph is internally using the temp job allocator when dispatching to other threads, and the frame rate (when not vsynced) is "outrunning" the dispatcher, triggering the allocator's leak heuristics.

    We plan to have this fixed for the 2019.2 preview - for now, I don't have a better workaround for you. :-|
     
    florianhanke likes this.
  15. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,190
    Wow, this looks really exciting! Audio hasn't progressed much since 5.0.0. Can't wait to SIMD all the mixing and effects. I'm so excited that I just went and searched for my signals textbook from university.
     
    janusl, MegamaDev, Piefayth and 3 others like this.
  16. TeotiGraphix

    TeotiGraphix

    Joined:
    Jan 11, 2011
    Posts:
    133
    As far as letting the user choose how they hook into the audio callbacks: I already have a use case for this, since I am using a custom audio engine in C/C++ and want to bring my samples into Unity and mix them there.

    I will definitely be trying this out when it gets some docs.
     
    RobJellinghaus and harini like this.
  17. id0

    id0

    Joined:
    Nov 23, 2012
    Posts:
    267
    Is there any sound occlusion (by walls or something) here (or planned) ?
     
  18. MegamaDev

    MegamaDev

    Joined:
    Jul 17, 2017
    Posts:
    52
    Personally, I cannot wait to finally be able to set non-zero loop starts in my audio clips without having 3+ audio sources acting as one. Even if I have to pull open the codebase and create my own output component/clip type, at least that's (possibly) going to be an option now. (And I'm kind of looking forward to doing it, honestly.)
     
    FROS7 likes this.
  19. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,190
    In UnityCsReference, if you look at the latest 2019.1 branch you can already see the backbone of the new audio stuff in the "Audio" folder. (Many files are newly added relative to the 2018.3 branch, shown as green in the picture.) Fortunately, now that Unity is transitioning to visible C# code, we can study in the meantime how things work without ECS in the way (how a sample provider could stream us bytes for the DSP graph, etc.)

    Screenshot 2019-04-22 12.31.56.png
     
    RobJellinghaus and siggigg like this.
  20. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    Event order when exiting playmode in the editor is somewhat messed up.
    I have an AudioSystem with OnEnable/OnDisable

    When exiting playmode:
    - OnDisable is called in the player scene where I release all DSPNodes
    - OnEnable is called in the editor scene where I re-initialize my graph for editor mode
    - I get a message saying: "Destroyed 1 DSPNodes that were not cleaned up. Memory leak may result."

    Cleaning up the playmode DSPNodes should happen before OnEnable is called in the editor scene.
     
  21. janusl

    janusl

    Unity Technologies

    Joined:
    Aug 8, 2018
    Posts:
    8
    Right now we're clearing DSPNodes when the C# domain gets reloaded - for safety reasons. Consider it an emergency cleanup, that may or may not overlap with playmode state.
     
  22. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    The other reload happens after script recompile, but that works as expected.
    1. Unity compiles scripts
    2. Bails if there's a compile error
    3. OnDisable is called on everything (*)
    4. C# domain gets reloaded
    5. OnEnable is called on everything (*)

    This is THE reason I use OnEnable for initialization: Start, for example, is not called after a script recompile.

    * I'm talking about [ExecuteInEditMode] only. For playmode scripts it's much simpler; their lifetime doesn't overlap with weird stuff like playmode changes or script recompiles. But I DO need audio in editor mode.

    The issue is: when I exit playmode the emergency cleanup happens AFTER I already built the editor DSP graph.
    I'll work around it with delay calls, but the code will get ugly fast...

    Edit: The cleanup even happens after the first round of Update calls...
     
    Last edited: Apr 26, 2019
  23. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    Edit: After more testing I found that even delayCall doesn't work 100% of the time.
    It works around 90-95% of the time.

    So, what event or callback can I use after exiting playmode that comes after the cleanup?
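    One candidate hook for this (an assumption to verify, since whether it fires after DSPGraph's emergency cleanup is exactly what's in question) is EditorApplication.playModeStateChanged; a sketch, with RebuildEditorGraph standing in for the user's re-initialization:

    ```csharp
    using UnityEditor;

    // Editor-only: rebuild the edit-mode DSP graph once the editor has fully
    // returned to edit mode, rather than from OnEnable during the transition.
    [InitializeOnLoad]
    static class EditorAudioBootstrap
    {
        static EditorAudioBootstrap()
        {
            EditorApplication.playModeStateChanged += state =>
            {
                if (state == PlayModeStateChange.EnteredEditMode)
                {
                    // RebuildEditorGraph();   // hypothetical re-initialization hook
                }
            };
        }
    }
    ```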
     
    Last edited: Apr 26, 2019
  24. RobJellinghaus

    RobJellinghaus

    Joined:
    Jul 10, 2017
    Posts:
    14
    This is immensely exciting, it's great to see audio get the DOTS treatment.

    My main question is this: is it currently possible to write a "loopback" application, that takes incoming microphone-captured audio and plays it through a spatialized emitter in the scene? Think of it as like a "loudspeaker" app, which blasts whatever you say in realtime out of a 3D audio emitter object off in the distance of the scene.

    Is DOTS being engineered to have a really low-latency round-trip audio path? That is, if I wrote such a "loudspeaker" app, what would be the total latency between my tapping the microphone and my hearing the emitted "tap" from the spatialized loudspeaker object? Most live looping or performance audio systems try to get round-trip audio latency under 10 msec or so (e.g. a 64-sample buffer at 48 kHz = 1.3 msec of buffering latency). Is that likely to be possible with DOTS? Will it depend on what input/output audio stack it's coupled to? (And are there any low-latency output audio stacks that handle Unity spatialization properly?)

    Really low latency spatialization is something I have been wondering about, and if the new DOTS audio stack helps handle that, it would be fantastic. Certainly DOTS should finally make it possible to write audio code in C# that can reliably meet low-latency tiny-buffer audio deadlines! (There's no deadline like an audio deadline, 'cause an audio deadline can't miss....)

    Context is that I've ported my http://holofunk.com gesture-controlled audiovisual live looping app (which used ASIO for low roundtrip audio latency on Windows) to Unity. I wound up trying out all the Windows audio APIs (AudioGraph, WASAPI) and finding it impossible to get their roundtrip latency low enough. So I wound up back on ASIO, only this time using JUCE and my own P/Invoke audio wrapper library. But ASIO won't work on HoloLens (it's not even a Microsoft standard), so I'd love a low-latency audio solution that'll work on HoloLens 2.

    If the DOTS audio stack lets me get really good low round-trip latency using Unity only (especially on HoloLens), it will be amazing for my app!

    Also, above you mention a bug fix you hoped to get in the 2019.2 preview. But there's nothing at all in the 2019.2 beta release notes about DOTS audio. Will 2019.2 have DOTS fixes?
     
    Last edited: May 12, 2019
  25. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,190
    From what I understand, DOTS Audio is separate from whatever input/output Unity uses, so by itself it will not change the audio path.

    So, for example, we currently have this pipeline for output:
    1. The imported AudioClip has the byte stream.
    2. An AudioSource reads out the stream; the output depends on the import settings.
    3. Multiple AudioSources work together to determine which ones get mixed at the end of the frame (priority, spatial settings, voice limit, etc.)
    4. The software mixer works with the AudioMixerGroup asset, applies channel effects, and sums the audio into one final stream. (This mixer is the built-in FMOD?)
    5. The final stream is sent to the native hardware abstraction layer, which depends on the device; it's Unity's job to select it (e.g. OpenSL ES on Android, which talks to the HAL).
    With DOTS and IAudioJobs, I think we could do whatever we want to produce the final stream for 5. by replacing 2., 3., and 4. with interconnected DSP nodes that transform audio bytes. That way you remove the latency of FMOD's software mixing by replacing it, but not the native-level latency.

    - Say I make a mixer-less scene that can only play one AudioClip at a time, where the next clip completely overwrites the previous one. This might be a cutscene where all the audio is already pre-mixed and I just want it in sync with the cutscene. Because it is a single stream, maybe I could program the native side to keep looking at a certain buffer and just send that out, cutting the time Unity has to spend interfacing with native code. (I don't know what happens there without source code access.)
    - I could make a new mixer that doesn't care about spatialization for my simple 2D game, to improve performance.
     
    RobJellinghaus likes this.
  26. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    Looks like Unity.Experimental.Audio has been removed from Unity 2019.2.

    There's Unity.Audio, but it only contains *Internal classes, e.g. DSPCommandBlockInternal.

    Where's the rest of the API?
    I looked in the package manager with experimental packages enabled, but I couldn't find one for DSPGraph.
     
  27. rizu

    rizu

    Joined:
    Oct 8, 2013
    Posts:
    1,008
    from: https://forum.unity.com/threads/megacity-feedback-discussion.647248/#post-4343740
    It's coming as a package for 2019.2, but it's not available yet AFAIK (at least I haven't seen anything related on bintray yet).

    Would be nice to get some estimate of when the package is coming, though (before the full 2019.2 launch, or after?).
     
    thelebaron and abstractmachine like this.
  28. harini

    harini

    Unity Technologies

    Joined:
    May 28, 2015
    Posts:
    11
    Correct! We are working on it but do not have exact dates yet. Will update this thread with more information soon!
     
    RobJellinghaus and FROS7 like this.
  29. harini

    harini

    Unity Technologies

    Joined:
    May 28, 2015
    Posts:
    11
    As part of the Megacity release, we also wrote a document describing how we approached creating sound in Megacity. This document (check https://unity.com/megacity for more information) could give you some insights on how we used ECS and Burst with DSPGraph. Please do check it out and give us your feedback and comments!
     
    RobJellinghaus and Akshara like this.
  30. ReaktorDave

    ReaktorDave

    Joined:
    May 8, 2014
    Posts:
    104
    This sounds very promising. Will the thin HAL be able to access ASIO devices on Windows? That would be extremely useful for installations with custom mixing tools and speaker setups (we use a 192-speaker wavefield synthesis setup).
     
    RobJellinghaus, FROS7 and Soaryn like this.
  31. Soaryn

    Soaryn

    Joined:
    Apr 17, 2015
    Posts:
    173
    I too would very much like this.
     
    RobJellinghaus likes this.
  32. Dunskat

    Dunskat

    Joined:
    Oct 17, 2017
    Posts:
    1
    I have a question: will there be a path to working with existing spatializers in DSPGraph? Or an API to inject whatever parameters the existing spatializers need from the sound emitter (position, output buffer) and get back the processed buffer?
    Or is work being done with other spatializer developers to have an existing product on release?

    And related to this: will there be a path for closed audio processing plug-ins to be created for Unity?
     
  33. janusl

    janusl

    Unity Technologies

    Joined:
    Aug 8, 2018
    Posts:
    8
    Currently no, but certainly in the future (there's no system for incoming audio into DSPGraph yet - will be a part of the HAL).

    The HAL will make it possible to select and initialize available drivers on your system. So the latency should be as low as your device allows, for any specific configuration.

    The HAL will come with some standard drivers as we add platform implementations (WASAPI, CoreAudio, etc.). Supporting low-level systems like ASIO is, however, one of the major reasons we're pushing this.

    While it may not be the first driver to be supported, the HAL is defined in terms of C# plugins so anything can be added down the line by users as well.

    No immediate API right now, no. For the time being, spatializers (and native audio plugins for that matter) can be P/Invoked perfectly well from DSP kernels.

    Probably to the extent that C# can provide (precompiled assemblies, obfuscation). Alternatively, a C# DSP kernel can P/Invoke into a native library if needed.
     
    Dunskat, RobJellinghaus and 5argon like this.
  34. RobJellinghaus

    RobJellinghaus

    Joined:
    Jul 10, 2017
    Posts:
    14
    Excellent, very glad this is on your radar! I am having difficulty getting a JUCE-based native audio plugin to work as well under Unity (native Windows target) as it does under my .NET test app. Posted for help about it over at the JUCE forums... since the Mono runtime doesn't support debugging into native DLLs, debugging this under Unity is extremely slow. JUCE is wonderful but if the HAL allows me to actually debug down into the Burst-compiled "native" C# that my loopback audio library is written in, I will be absolutely thrilled.

    ...And, maybe I should ask: will it be possible to debug into the Burst-compiled C# running in my app's DSPGraph? When targeting a native PC app on Windows? (better not get too excited just yet....)

    OH YES! AND! Will all this run on HoloLens 2? With audio DSPGraph debugging, even? Never hurts to ask for the moon.

    Also, I noticed this in the document talking about the Megacity audio project:

    That update rate is given by the DSP block size defined in Unity’s Audio Project settings (typically 512, 1024 or 2048 sample blocks with target platform-specific variations), and the system’s sample rate (typically 44100 or 48000 samples/second), so somewhere in the range of 10-45ms.

    My Focusrite 2i2 USB interface's ASIO drivers can be turned all the way down to 16-sample blocks at 192 kHz. (And that's a mid-range interface, $160 or so.) I hear you saying above that you'll support whatever the hardware can do, so I wanted to provide this data point. Realtime performance audio apps need <10 msec total latency including all buffering and FX, and that's an audio latency target I don't often see mentioned, so I hope you include it in your testing when you bring the input HAL online.
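    The buffering figures quoted in this thread all follow from block size divided by sample rate; a quick sketch of the arithmetic (the helper name is illustrative):

    ```csharp
    // Buffering latency in milliseconds contributed by one DSP block:
    //   latencyMs = 1000 * blockSize / sampleRate
    static double BlockLatencyMs(int blockSize, int sampleRate)
        => 1000.0 * blockSize / sampleRate;

    // BlockLatencyMs(512, 48000)  ≈ 10.7 ms  (low end of the 10-45 ms range Unity quotes)
    // BlockLatencyMs(2048, 44100) ≈ 46.4 ms  (high end)
    // BlockLatencyMs(64, 48000)   ≈ 1.3 ms   (small ASIO-style buffer)
    // BlockLatencyMs(16, 192000)  ≈ 0.08 ms  (the interface's extreme setting)
    ```

    Note this is only the per-block buffering term; real round-trip latency adds driver, device, and any double-buffering overhead on top.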

    If you can write a "loopback" app that just routes input directly to output, and if you can tap the mike and hear it on the output with no perceptible delay whatsoever, then you'll unlock performance audio apps on Unity that are otherwise literally impossible. Of course not every platform can do this at the driver/hardware level (hello Android), but iOS can, and macOS can, and ASIO on Windows can....
     
    Last edited: May 28, 2019
    davidbarbera9 likes this.
  35. erwincoumans

    erwincoumans

    Joined:
    Oct 24, 2012
    Posts:
    3
    A DSP graph like VCV Rack, with wires and modules GUI would be amazing, so you can even create a Eurorack with 3d sound in the loop.
     
  36. rizu

    rizu

    Joined:
    Oct 8, 2013
    Posts:
    1,008
  37. Excelerator

    Excelerator

    Joined:
    Nov 14, 2013
    Posts:
    32
    I've been wanting this for years! :eek:

    In fact this is the only reason I came to this thread to see if anyone was talking about it. We need this for split screen games too!

    Let's make it happen! Multiple audio listener support!
     
    Last edited: Jun 8, 2019
  38. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    25,212
    It's a bit niche, but typically I would imagine it's handled with left/right panning? Or do you mean routing streams to different outputs, such as headphones for one and speakers for another? Panning is probably more relevant for home gamers if the players are nearby.

    Alternatively:

    Visualise it like a minimap problem: the audio listener sits at the center (0,0,0) facing forward, and the sources are shifted into that local space. Then you can have everything you're asking for, for any number of players.
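    A minimal sketch of that local-space trick, assuming one real AudioListener fixed at the origin (PlaceProxy and all names here are illustrative):

    ```csharp
    using UnityEngine;

    public static class VirtualListener
    {
        // Position a proxy AudioSource so the origin-fixed listener hears it as if
        // the listener stood at virtualListener's pose in the world. Each virtual
        // listener gets its own set of proxy sources, updated every frame.
        public static void PlaceProxy(Transform virtualListener, Vector3 worldSourcePos, Transform proxySource)
        {
            // Convert the world-space emitter position into the virtual listener's
            // local space; the real listener at (0,0,0) then hears correct 3D audio.
            proxySource.position = virtualListener.InverseTransformPoint(worldSourcePos);
        }
    }
    ```

    Crossfading between two virtual listeners then reduces to routing each proxy set to its own mixer group and fading the group volumes.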
     
  39. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    31
    There was a two-week silence after I asked that question, so I initially dropped it. :p

    But it would still be very useful to me. My main purpose would be to cross-fade audio between two points in space - something I still don’t believe there’s a great way to do in Unity (though I would love to learn otherwise).

    It’s a feature that may also be useful to people using Unity for real-time film projects or advertising. I would love to see it implemented, or to have low level access to be able to implement it ourselves.
     
  40. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    25,212
    That's the thing, though: it's easily done with existing Unity, but it requires abstracting where the audio sources are. They all need to be in the same local space.

    The two locations of sound are spatially in the same place, moved around the listener, which doesn't move.
    Then use the existing audio mixer to crossfade between the two sets, which are local to the listener.

    Obviously it would be simpler, and wouldn't cost the money of hiring a Unity developer, to have it built in (which would just do this under the hood anyway, since there's only one real listener).
     
  41. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    31
    What you’re describing here is what I’ve done in the past, but I’ve always considered it a hacky approach. :p And it doesn’t work well in all cases (like times when you would end up with overlapping reverb volumes).

    That said, if the goal of DOTS audio is to reproduce the existing Unity audio feature set in a DOTS environment, maybe requests for this new feature aren’t appropriate for the thread.
     
    hippocoder likes this.
  42. Excelerator

    Excelerator

    Joined:
    Nov 14, 2013
    Posts:
    32
    Which is what I've been doing, but with big performance costs and quirks. I would imagine DOTS could help optimize this and make it a Unity standard to allow multiple audio listeners. I'm not familiar enough to know if this is in the scope of DOTS audio, or if I need to bring this request elsewhere, but I imagine it could be beneficial to coordinate the development.

    I would not have any need to do that. All I need is to have multiple 3D listener locations which can be mixed together or interpolated.
     
    Last edited: Jun 11, 2019
    PublicEnumE likes this.
  43. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    945
    unity_KNMBKAJDXv98xw, Hyp-X and rizu like this.
  44. janusl

    janusl

    Unity Technologies

    Joined:
    Aug 8, 2018
    Posts:
    8
    Yes, it will be. Support is already there on some platforms to varying degree, including Windows.

    I will look into it.

    Yes, we hear you. As mentioned, a large part of this effort is to enable lower-latency systems in a pluggable way. So even if we don't provide the best driver support initially, it will not block unlocking a device's full potential while still using the rest of our tech stack.

    The purpose of the HAL is just to serve a buffer to the session driver, but as I'm sure you know, the resulting latency of a session is a product of many variables, so a hard number from our side rarely makes sense: we can offer what the system offers, which is what we aim for. For the curious, here's a comparison of attainable latencies on recent mobile devices done by ROLI/JUCE:

    https://juce.com/discover/stories/Mobile performance index
     
    GliderGuy and Akshara like this.
  45. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    I got it to compile with 2019.2.0b6 \o/
    Now to make it work.

    I see that DSPGraph.GetDefaultGraph() is gone, so I can only use DSPGraph.Create().

    But then I have no idea how to connect it to the soundcard output.
     
  46. Tak

    Tak

    Unity Technologies

    Joined:
    Mar 8, 2010
    Posts:
    945
    You'll do something like this:
    Code (CSharp):
        // Create the graph
        var graph = DSPGraph.Create(SoundFormat.Stereo, 2, 1024, 48000);

        // Create a barebones driver that just drives the graph mix based on when the system wants samples
        var driver = new DefaultDSPGraphDriver { Graph = graph };

        // Attach this driver and graph to the audio output that's configured in Unity
        var outputHandle = driver.AttachToDefaultOutput();
    More fleshed-out documentation and examples are coming :)
     
    rizu likes this.
  47. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    I don't see any class named DefaultDSPGraphDriver in com.unity.audio.dspgraph or inside any of the engine .dlls.
    Also, the only mention of AttachToDefaultOutput is AudioOutputExtensions.AttachToDefaultOutput(), which expects a struct that implements IAudioOutput, but no implementation is provided in the package.
     
  48. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    OK, I implemented it as:

    Code (CSharp):
        struct DefaultDSPGraphDriver : IAudioOutput
        {
            public DSPGraph dspGraph;
            private int channelCount;

            public void Initialize(int channelCount, SoundFormat format, int sampleRate, long dspBufferSize)
            {
                this.channelCount = channelCount;
            }

            public void BeginMix(int frameCount)
            {
                dspGraph.BeginMix(frameCount);
            }

            public void EndMix(NativeArray<float> output, int frames)
            {
                dspGraph.ReadMix(output, frames, channelCount);
            }

            public void Dispose()
            {
            }
        }
    Now I've gotten to the point where I can play a sound \o/
    Then the Unity Editor disappears without a (stack) trace.
    There's nothing in the log, and the crash handler doesn't come up.
     
  49. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    Turning off Burst caught a NullReferenceException in my audio kernel.
    Now Unity crashes with a stack trace:

    Code (CSharp):
    1. 0x00007FF70A2B5DC8 (Unity) DSPGraph::ProcessCommandQueue
    2. 0x00007FF70A2BC36A (Unity) DSPGraph::SetupBeginMix
    3. 0x00007FF70A2B0F74 (Unity) DSPGraph::BeginMix
    4. 0x00007FF70A2B32EA (Unity) Internal_BeginMix
    5. 0x00007FF70A78B658 (Unity) DSPGraphInternal_CUSTOM_Internal_BeginMix
    6. 0x000001C83DF8A690 (Mono JIT Code) (wrapper managed-to-native) Unity.Audio.DSPGraphInternal:Internal_BeginMix (Unity.Audio.Handle&,int,int)
    7. 0x000001C83DF8A39B (Mono JIT Code) [E:\Work\*********\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.3\Runtime\DSPGraph.cs:38] Unity.Audio.DSPGraph:BeginMix (int,Unity.Audio.DSPGraph/ExecutionMode)
    8. 0x000001C83DF8A2A3 (Mono JIT Code) [E:\Work\*********\Assets\Engine\Kite.Audio\Runtime\System\AudioSystem.cs:48] Kite.Audio.AudioSystem/DefaultDSPGraphDriver:BeginMix (int)
    9. 0x000001C83DF88E0B (Mono JIT Code) [E:\Work\*********\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.3\Runtime\Extensions\AudioOutputExtensions.cs:61] Unity.Audio.AudioOutputExtensions/AudioOutputHookStructProduce`1<Kite.Audio.AudioSystem/DefaultDSPGraphDriver>:Execute (Kite.Audio.AudioSystem/DefaultDSPGraphDriver&,intptr,intptr,Unity.Jobs.LowLevel.Unsafe.JobRanges&,int)
    10. 0x000001C83DF89C62 (Mono JIT Code) (wrapper delegate-invoke) Unity.Audio.AudioOutputExtensions/AudioOutputHookStructProduce`1/ExecuteJobFunction<Kite.Audio.AudioSystem/DefaultDSPGraphDriver>:invoke_void_TOutput&_intptr_intptr_JobRanges&_int (Kite.Audio.AudioSystem/DefaultDSPGraphDriver&,intptr,intptr,Unity.Jobs.LowLevel.Unsafe.JobRanges&,int)
    11. 0x000001C83DF88049 (Mono JIT Code) (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___intptr&_intptr_intptr_intptr&_int (object,intptr,intptr,intptr)
    12. 0x00007FFB3C0EC11B (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\mini\mini-runtime.c:2809] mono_jit_runtime_invoke
    13. 0x00007FFB3C072282 (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\metadata\object.c:2921] do_runtime_invoke
    14. 0x00007FFB3C07B27F (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\metadata\object.c:2968] mono_runtime_invoke
    15. 0x00007FF70CBBB3E2 (Unity) scripting_method_invoke
    16. 0x00007FF70CBB50A1 (Unity) ScriptingInvocation::Invoke
    17. 0x00007FF70C56A0F0 (Unity) ExecuteJobWithSharedJobData
    18. 0x00007FF70A29E11C (Unity) AudioOutputHookManager::RunBeginMixJobs
    19. 0x00007FF70A881E25 (Unity) AudioManager::systemCallback
    20. 0x00007FF70DB41D5A (Unity) FMOD::DSPSoundCard::read
    21. 0x00007FF70DAF5349 (Unity) FMOD::Output::mix
    22. 0x00007FF70DB3D078 (Unity) FMOD::OutputWASAPI::mixerUpdate
    23. 0x00007FF70DACDB66 (Unity) FMOD::Thread::callback
    24. 0x00007FF70E1B98AC (Unity) thread_start<unsigned int (__cdecl*)(void * __ptr64)>
    25. 0x00007FFB9DBE7974 (KERNEL32) BaseThreadInitThunk
    26. 0x00007FFB9E08A271 (ntdll) RtlUserThreadStart
    I suspect my cleanup code, which runs after the sound has ended. The following code was put together by studying the Megacity version and was only minimally modified so it compiles on 2019.2.

    Code (CSharp):
    block.SetSampleProvider<VoicePlayerJob.Params, VoicePlayerJob.Providers, VoicePlayerJob>(
        (AudioClip)null, playerDspNode, VoicePlayerJob.Providers.Sample, 0);

    VoicePlayerDisposeJob disposeJob = new VoicePlayerDisposeJob();
    block.CreateUpdateRequest<VoicePlayerDisposeJob, VoicePlayerJob.Params, VoicePlayerJob.Providers, VoicePlayerJob>(
        disposeJob, playerDspNode, request =>
        {
            request.Dispose();
            voiceCompleted.Dispose();
        });

    block.ReleaseDSPNode(playerDspNode);
    block.ReleaseDSPNode(monoToStereoDspNode);
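    One possible reordering (a sketch only, reusing the names from the snippet above; I'm assuming graph.CreateCommandBlock() and Complete() behave as in the preview package) would defer the node releases until the dispose callback has actually run, so the nodes aren't torn down by the same command block that still references them:

    Code (CSharp):
    // Sketch only: defer releasing the nodes until the kernel's dispose
    // request has completed, instead of releasing them in the same
    // command block. `graph`, `playerDspNode`, `monoToStereoDspNode`
    // and `voiceCompleted` come from the surrounding setup code.
    var disposeJob = new VoicePlayerDisposeJob();
    block.CreateUpdateRequest<VoicePlayerDisposeJob, VoicePlayerJob.Params, VoicePlayerJob.Providers, VoicePlayerJob>(
        disposeJob, playerDspNode, request =>
        {
            request.Dispose();
            voiceCompleted.Dispose();

            // Release the nodes only after the dispose callback has run.
            var cleanup = graph.CreateCommandBlock();
            cleanup.ReleaseDSPNode(playerDspNode);
            cleanup.ReleaseDSPNode(monoToStereoDspNode);
            cleanup.Complete();
        });

    No idea yet whether this is the intended lifetime model, but it at least separates the release from the in-flight update request.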
     
  50. Hyp-X

    Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    121
    I got to the point where I can run DSPGraph, starting and completing sounds, for several seconds before Unity crashes.
    Things I found out:

    Code (CSharp):
    block.SetSampleProvider<VoicePlayerJob.Params, VoicePlayerJob.Providers, VoicePlayerJob>(
        (AudioClip)null, playerDspNode, VoicePlayerJob.Providers.Sample, 0);
    causes a crash later, so I simply commented it out.
    Adding a DisposeJob threw errors, so I tried to get rid of it.

    I removed the voiceCompleted signaling and went back to using PostEvent.

    I found that you need to call DSPGraph.Update() for events to be delivered. (That wasn't a requirement before.)
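    For anyone hitting the same thing, this is roughly what the pump looks like (a sketch only; the DSPGraphPump class and the graph field are my names, not part of the package):

    Code (CSharp):
    using Unity.Audio;
    using UnityEngine;

    // Sketch only: pump DSPGraph from the main thread so that update
    // request callbacks and events actually get delivered. The class
    // and field names here are illustrative, not from the package.
    public class DSPGraphPump : MonoBehaviour
    {
        public DSPGraph graph; // created during initialization elsewhere

        void Update()
        {
            // Without this call, events never arrive.
            graph.Update();
        }
    }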

    Not calling the right things during shutdown will crash Unity, so there was a lot of trial and error restarting Unity again and again.
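    The shutdown order that has been least crash-prone for me is sketched below (illustrative only; I'm assuming DSPGraph is disposable as in the preview package, and `graph` comes from elsewhere): flush pending callbacks with one last Update(), then dispose the graph last.

    Code (CSharp):
    // Sketch only: tear-down order that tries to avoid freeing things
    // the mixer thread may still touch. Assumes `graph` was created
    // elsewhere during initialization.
    void OnDestroy()
    {
        graph.Update();  // flush pending dispose/update-request callbacks
        graph.Dispose(); // tear down the graph itself last
    }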

    When I run the system for a few seconds I get a crash in:
    Code (CSharp):
    1. 0x00007FF70A29D49E (Unity) FreeArrayForDSPGraph
    2. 0x00007FF70CCFAB24 (Unity) UnsafeUtility_CUSTOM_Free
    3. 0x000001BF06DEF837 (Mono JIT Code) (wrapper managed-to-native) Unity.Collections.LowLevel.Unsafe.UnsafeUtility:Free (void*,Unity.Collections.Allocator)
    4. 0x000001BF06DEE9A3 (Mono JIT Code) [C:\buildslave\unity\build\Runtime\Export\NativeArray\NativeArray.cs:165] Unity.Collections.NativeArray`1<single>:Dispose ()
    5. 0x000001BF06DEE75B (Mono JIT Code) [E:\Work\*******\Assets\Engine\Kite.Audio\Runtime\DSP\VoicePlayer.cs:94] VoicePlayerJob:Dispose ()
    6. 0x000001BF06DEE673 (Mono JIT Code) [E:\Work\*******\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.3\Runtime\Extensions\AudioKernelExtensions.cs:260] Unity.Audio.AudioKernelExtensions/AudioKernelJobStructProduce`3<VoicePlayerJob, VoicePlayerJob/Params, VoicePlayerJob/Providers>:DisposeKernel (VoicePlayerJob&)
    7. 0x000001BF06DDE783 (Mono JIT Code) [E:\Work\*******\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.3\Runtime\Extensions\AudioKernelExtensions.cs:196] Unity.Audio.AudioKernelExtensions/AudioKernelJobStructProduce`3<VoicePlayerJob, VoicePlayerJob/Params, VoicePlayerJob/Providers>:Execute (VoicePlayerJob&,intptr,intptr,Unity.Jobs.LowLevel.Unsafe.JobRanges&,int)
    8. 0x000001BF06DDEAF2 (Mono JIT Code) (wrapper delegate-invoke) Unity.Audio.AudioKernelExtensions/AudioKernelJobStructProduce`3/ExecuteKernelFunction<VoicePlayerJob, VoicePlayerJob/Params, VoicePlayerJob/Providers>:invoke_void_TAudioKernel&_intptr_intptr_JobRanges&_int (VoicePlayerJob&,intptr,intptr,Unity.Jobs.LowLevel.Unsafe.JobRanges&,int)
    9. 0x000001BF04276EA9 (Mono JIT Code) (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___intptr&_intptr_intptr_intptr&_int (object,intptr,intptr,intptr)
    10. 0x00007FFB6310C11B (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\mini\mini-runtime.c:2809] mono_jit_runtime_invoke
    11. 0x00007FFB63092282 (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\metadata\object.c:2921] do_runtime_invoke
    12. 0x00007FFB6309B27F (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\metadata\object.c:2968] mono_runtime_invoke
    13. 0x00007FF70CBBB3E2 (Unity) scripting_method_invoke
    14. 0x00007FF70CBB50A1 (Unity) ScriptingInvocation::Invoke
    15. 0x00007FF70C56A0F0 (Unity) ExecuteJobWithSharedJobData
    16. 0x00007FF70A2B47CE (Unity) DSPGraph::MainThreadUpdate
    17. 0x00007FF70A78BEC7 (Unity) DSPGraphInternal_CUSTOM_Internal_Update
    18. 0x000001BF06973ABE (Mono JIT Code) (wrapper managed-to-native) Unity.Audio.DSPGraphInternal:Internal_Update (Unity.Audio.Handle&)
    19. 0x000001BF0697398B (Mono JIT Code) [E:\Work\*******\Library\PackageCache\com.unity.audio.dspgraph@0.1.0-preview.3\Runtime\DSPGraph.cs:177] Unity.Audio.DSPGraph:Update ()
    20. 0x000001BF069724FB (Mono JIT Code) [E:\Work\*******\Assets\Engine\Kite.Audio\Runtime\System\AudioSystem.cs:283] Kite.Audio.AudioSystem:Update ()
    21. 0x000001BF27AE0168 (Mono JIT Code) (wrapper runtime-invoke) object:runtime_invoke_void__this__ (object,intptr,intptr,intptr)
    22. 0x00007FFB6310C11B (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\mini\mini-runtime.c:2809] mono_jit_runtime_invoke
    23. 0x00007FFB63092282 (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\metadata\object.c:2921] do_runtime_invoke
    24. 0x00007FFB6309B27F (mono-2.0-bdwgc) [c:\users\builduser\builds\tsuywz8z\0\vm\mono\mono\metadata\object.c:2968] mono_runtime_invoke
    25. 0x00007FF70CBBB3E2 (Unity) scripting_method_invoke
    26. 0x00007FF70CBB50A1 (Unity) ScriptingInvocation::Invoke
    27. 0x00007FF70CB6F7CB (Unity) MonoBehaviour::CallMethodIfAvailable
    28. 0x00007FF70CB6FBB6 (Unity) MonoBehaviour::CallUpdateMethod
    29. 0x00007FF70C2802A8 (Unity) BaseBehaviourManager::CommonUpdate<BehaviourManager>
    30. 0x00007FF70C288B24 (Unity) BehaviourManager::Update
    31. 0x00007FF70C697113 (Unity) `InitPlayerLoopCallbacks'::`2'::UpdateScriptRunBehaviourUpdateRegistrator::Forward
    32. 0x00007FF70C6807D8 (Unity) ExecutePlayerLoop
    33. 0x00007FF70C6808B9 (Unity) ExecutePlayerLoop
    34. 0x00007FF70C68572B (Unity) PlayerLoop
    35. 0x00007FF70AC65BCB (Unity) PlayerLoopController::UpdateScene
    36. 0x00007FF70AC636CC (Unity) Application::TickTimer
    37. 0x00007FF70B4E6ADD (Unity) MainMessageLoop
    38. 0x00007FF70B4E951F (Unity) WinMain
    39. 0x00007FF70E172882 (Unity) __scrt_common_main_seh
    40. 0x00007FFB9DBE7974 (KERNEL32) BaseThreadInitThunk
    41. 0x00007FFB9E08A271 (ntdll) RtlUserThreadStart
    So far this all feels like navigating a minefield blindfolded. At least in 2019.1 I had Megacity, which I knew contained code that actually works.