Discussion in 'Data Oriented Technology Stack' started by harini, Mar 28, 2019.
Thank you for that info. Happy Holidays!
The minimum supported version is 2019.2.8f1
Does DOTS audio allow writing to the audio buffer without allocating any memory? OnAudioFilterRead has been allocating since roughly Unity 5.5 (it was fine before).
I have been told it's because the Mono domain is attached and detached on every read, which allocates GC memory.
If so, could we have a simple example of how to write to the output buffer without using OnAudioFilterRead?
I'm writing a basic midi synth. I'm experiencing a noticeable amount of latency when I Play/Pause using a CommandBlock.UpdateAudioKernel call, which can be difficult to adjust for when using a midi controller. What sorts of things should I be looking for to reduce that if possible?
Yes. There are some simple examples embedded in the com.unity.audio.dspgraph package, including writing procedurally-generated samples to the default output stream and playing a clip.
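For anyone who can't dig through the package right away, here's a rough sketch of what a procedural kernel looks like in the 0.1.x preview API. Names like `SineKernel` are my own, and the exact API surface (e.g. `SampleBuffer.Buffer` being an interleaved `NativeArray<float>`) has shifted between preview versions, so treat this as a sketch, not gospel:

```csharp
using Unity.Audio;
using Unity.Burst;
using Unity.Mathematics;

// Sketch of a kernel that writes a sine wave into its output buffer.
// Based on the 0.1.x preview of com.unity.audio.dspgraph; member names
// may differ in your package version.
[BurstCompile]
struct SineKernel : IAudioKernel<SineKernel.Params, SineKernel.Providers>
{
    public enum Params { Frequency }
    public enum Providers { }

    float m_Phase;

    public void Initialize() { }

    public void Execute(ref ExecuteContext<Params, Providers> context)
    {
        var output = context.Outputs.GetSampleBuffer(0);
        var buffer = output.Buffer; // interleaved floats in this preview
        int channels = output.Channels;

        for (int frame = 0; frame < output.Samples; frame++)
        {
            // Parameters can be read per-sample for interpolation.
            float freq = context.Parameters.GetFloat(Params.Frequency, frame);
            m_Phase += freq / context.SampleRate;
            if (m_Phase > 1f) m_Phase -= 1f;

            float sample = math.sin(m_Phase * 2f * math.PI);
            for (int c = 0; c < channels; c++)
                buffer[frame * channels + c] = sample;
        }
    }

    public void Dispose() { }
}
```

Since `Execute` runs as a Burst job with no managed allocations, this avoids the GC traffic that OnAudioFilterRead incurs.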
You could try controlling play/pause via a parameter on the kernel instead, and/or reducing the dsp buffer size for the graph.
Today, there will always be up to DSPBufferSize samples of latency, because command blocks are applied between mixes, and UpdateAudioKernel actually executes an additional job (the update job you supply).
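To illustrate the parameter-based approach: instead of swapping kernels with UpdateAudioKernel, you gate the output inside the kernel with a parameter and just set it from a command block. `SynthKernel`, its `Playing` parameter, and `synthNode` below are hypothetical names for your own kernel, and the `SetFloat` signature varies between preview versions:

```csharp
// Toggle a hypothetical "Playing" parameter that the kernel reads each
// sample and uses to gate its output. No extra update job is scheduled.
using (var block = graph.CreateCommandBlock())
{
    // 1f = playing, 0f = paused; a short interpolation length avoids clicks.
    block.SetFloat<SynthKernel.Params, SynthKernel.Providers, SynthKernel>(
        synthNode, SynthKernel.Params.Playing, 1f, 64);
}
```

The parameter change still lands at the next mix boundary, but it skips the cost of the additional update job.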
Thanks! The ScheduleParameter example seems quite straightforward.
I don't understand the purpose of the DSPCommandBlock however.
Since you can recreate it from the graph every time, why expose it? Wouldn't it be better to just hide it in the API?
Or are there scenarios when it's useful to store it?
Are you supposed to NOT store it, because it could change at runtime?
DSPCommandBlock is for gathering groups of changes together so they can be applied atomically.
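In other words, everything queued on one block lands together between two mixes, so the graph never renders a half-applied change. A sketch, assuming a hypothetical `NoiseKernel` and the 0.1.x preview API (the `AddOutletPort` signature in particular has changed across versions):

```csharp
// None of these commands take effect individually; they are all applied
// atomically, between two mixes, when Complete() runs.
var block = graph.CreateCommandBlock();

var node = block.CreateDSPNode<NoiseKernel.Params, NoiseKernel.Providers, NoiseKernel>();
block.AddOutletPort(node, 2, SoundFormat.Stereo); // signature varies by version
block.Connect(node, 0, graph.RootDSP, 0);
block.SetFloat<NoiseKernel.Params, NoiseKernel.Providers, NoiseKernel>(
    node, NoiseKernel.Params.Gain, 0.5f);

block.Complete(); // the whole batch is applied before the next mix
```

That atomicity is why the type is exposed: you decide which changes must land together, rather than the API guessing for you.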
Thanks! It's clear now.
Is there a way to expose the output buffer outside the AudioKernel?
I have a quite complex system that relies on several MonoBehaviours right now, and it would be quite lengthy to convert everything to the new system.
Right now I am using MonoPInvokeCallback and IntPtr to share the buffer between native and managed code.
Is there an equivalent for DOTS?
Sorry for the long delay.
If I understand what you're asking, then not really.
However, you can implement only part of your system as a graph and drive it manually (calling BeginMix/ReadMix yourself), if that makes sense for your use case.
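A minimal sketch of that manual-driving idea, assuming a graph you created yourself and the 0.1.x preview signatures of BeginMix/ReadMix (both have shifted between versions, and the frame count and channel count here are placeholders):

```csharp
using Unity.Audio;
using Unity.Collections;

// Drive the graph manually instead of attaching it to Unity's output.
// You would call this from your own audio callback or pump.
const int channels = 2;
const int frames = 1024;
var buffer = new NativeArray<float>(frames * channels, Allocator.Persistent);

graph.BeginMix(frames);                  // kick off the mix jobs
graph.ReadMix(buffer, frames, channels); // blocks until the samples are ready

// ... hand `buffer` to the rest of your existing MonoBehaviour/native system,
// then eventually buffer.Dispose() when you tear things down.
```

This lets the DSPGraph portion coexist with the MonoPInvokeCallback/IntPtr plumbing you already have, since you control exactly where the mixed samples go.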
Is there any info on when documentation will be available for DSP Graph? I mean, this was announced over a year ago and there are still no signs of it on the Unity roadmap. Can we at least expect anything stable in terms of API this year?
I have made a modular synth using a modified DSP Graph 0.1.0-preview.11. Basically, the modification allows unconnected inputs to be treated as connected (their buffers are filled with zeros), because I found DSP Graph's handling of connections inside kernels very annoying. With some minor modifications this could be used with the standard DSP Graph.
This project also contains a very hackish/naive implementation of microphone input in the graph (someone asked about that earlier).
Here it is: https://github.com/aksyr/Unity-DSP-Graph-Modular-Synth
Will the less low level part of dots audio be a part of this package or another package entirely?
Guys, I'm lost. How do I play a collision audio clip when two objects collide, in a simple pool game?
The current plan is that there will be a separate DOTS Audio package.
You might be in the wrong place; this thread is about the low-level, experimental, upcoming audio support for DOTS.
Is there any news on the ETA of that package?
Sorry, we don't have any news to announce right now.
What is the general purpose of having multiple SampleProviders in one node? I assume mixing is done by feeding multiple nodes into another node, right? So why feed more than one sound source into a single node via SampleProviders? What am I missing?