DOTS Audio Discussion

Hi everyone,

As many of you requested, we are starting a dedicated space to talk about our new DOTS Audio system, and specifically DSPGraph, which a lot of you have been exploring through the Megacity demo. We would like to use this thread to gather feedback and requirements and to have candid discussions about your audio needs. Please feel free to post any requests, concerns, or issues you have with the new DOTS-based audio system in this thread.

Also keep an eye on this space for announcements and updates as we continue building our new DOTS-based audio system in Unity!

You can learn more about our design philosophy and approach in Wayne's Unite LA talk on DOTS Audio and DSPGraph.

Please note that DSPGraph is still under development and we are reworking our APIs. Expect things to change, and consider this a very early version for you all to play with!

13 Likes

When can we expect basic documentation and DOTS Audio as a package? :slight_smile:

7 Likes

From what I understand, this system generates a graph similar to the way a PlayableGraph works, and then there's a visualiser. Are there plans for making it a system like Shader/VFX Graph, where the authoring can happen in a UI and not just code? I also wonder why the UI design here is disparate compared to the other tools.

1 Like

We’re targeting Unity 2019.2 for the first preview package of DSPGraph, the core of what will be DOTS audio.

6 Likes

Out of curiosity, with this system would we be able to choose sound devices to input from and output to? Or would we still be limited to Unity's current system for that? Being able to provide mic inputs via direct DOTS integration would be neat.

I imagine having a few mic inputs and potentially 4 outputs carrying different tracks (music, VoIP, sound FX).

I'm posting my question here since it seems like the better place:
I see that an IAudioJob containing ExecuteContext.PostEvent doesn’t compile with Burst.
Will this be supported by Burst at some point or will the API change?

I see the Megacity demo avoids this API even though it’s part of the presentation.

I can work around this with a one-element array

    // Flag written by the audio job and polled from the main thread
    [NativeDisableContainerSafetyRestriction]
    public NativeArray<bool> voiceCompleted;

and polling that bool on the main thread, but events would be cleaner.
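For reference, here is a sketch of the main-thread side of that polling workaround (the audio job itself is omitted; it is assumed to set `voiceCompleted[0]` to true when the voice finishes):

    using Unity.Collections;
    using UnityEngine;

    // Main-thread side of the polling workaround: the audio job (not shown)
    // writes voiceCompleted[0] = true when the voice finishes, and we poll it
    // here instead of relying on ExecuteContext.PostEvent.
    public class VoicePoller : MonoBehaviour
    {
        NativeArray<bool> voiceCompleted;

        void OnEnable()
        {
            voiceCompleted = new NativeArray<bool>(1, Allocator.Persistent);
        }

        void Update()
        {
            if (voiceCompleted.IsCreated && voiceCompleted[0])
            {
                voiceCompleted[0] = false; // reset for the next voice
                // ... react to "voice completed" on the main thread ...
            }
        }

        void OnDisable()
        {
            if (voiceCompleted.IsCreated) voiceCompleted.Dispose();
        }
    }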

[quote=“vertxxyz, post:3, topic: 737349, username:vertxxyz”]
Are there plans for making it a system like Shader/VFX Graph, where the authoring can happen in a UI and not just code?
[/quote]

Yes (no details yet).

[quote=“vertxxyz, post:3, topic: 737349, username:vertxxyz”]
I also wonder why the UI design here is disparate compared to the other tools.
[/quote]

As you mentioned yourself, there's a difference between visualizers and authoring tools. GraphView (the UI framework Shader Graph and VFX Graph are built on) is an authoring tool, not really a read-only real-time visualization. This was just a prototype though, so it will align in the end :slight_smile:

Yes, it will be supported soon.

We’re working on a new system for this, with scriptable inputs / outputs. It will essentially be a thin HAL with device selection working together with DSPGraph.

We’re breaking the work into many pieces, that we will release separately - with the DSPGraph engine being the first out.

8 Likes

Is the plan to continue using the FMOD API and build DSPGraph on top of it?

DSPGraph has been developed so that it is already independent of the FMOD APIs; currently, FMOD is only used for input/output. As Wayne mentions in the Unite LA talk, you can take the DSPGraph output and hand it to any other third-party library, or even to OnAudioFilterRead. You can also procedurally generate audio samples and feed them as input to the graph, building something that does not require FMOD at all.
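For example, here is a minimal sketch of the OnAudioFilterRead route, with a procedural sine generator standing in for the mixed graph output (the 440 Hz tone and the gain are arbitrary choices for illustration):

    using UnityEngine;

    // Sketch: feed procedurally generated samples straight into the audio
    // pipeline via OnAudioFilterRead, with no FMOD-side clip involved.
    // Attach next to an AudioSource so the filter callback fires.
    public class ProceduralTone : MonoBehaviour
    {
        public float frequency = 440f; // arbitrary test tone
        float phase;
        int sampleRate;

        void Awake()
        {
            sampleRate = AudioSettings.outputSampleRate;
        }

        // Runs on the audio thread; keep it allocation-free.
        void OnAudioFilterRead(float[] data, int channels)
        {
            float step = 2f * Mathf.PI * frequency / sampleRate;
            for (int i = 0; i < data.Length; i += channels)
            {
                float s = Mathf.Sin(phase) * 0.25f; // modest gain
                phase = (phase + step) % (2f * Mathf.PI);
                for (int c = 0; c < channels; c++)
                    data[i + c] = s; // same sample to every channel
            }
        }
    }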

So to answer your question, we are working on providing a solution which will enable the users to choose what they want.

5 Likes

One audio feature I’ve always wanted to do in Unity (which I don’t believe there is a straightforward way to do) is to mix 3D audio from two different listeners.

For instance, imagine a first-person point-and-click adventure game, with Myst-like crossfade transitions when the player clicks to move. I'd also like to cross-fade the audio between the 'old' and 'new' locations: fading out audio from where the player was while fading in audio from where the player is moving to.

Right now, I don’t think that’s possible in Unity, without some clever hacky trickery.

Would something like this potentially be possible with the move to ECS audio?

2 Likes

So what are the typical reasons to stay with FMOD?

1 Like

Now with 2019.1.0 RC1 I'm getting console spam:

Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
(Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 539)

To Debug, enable the define: TLA_DEBUG_STACK_LEAK in ThreadsafeLinearAllocator.cpp. This will output the callstacks of the leaked allocations
(Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 541)

Internal: deleting an allocation that is older than its permitted lifetime of 4 frames (age = 5)
(Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 313)

Is there something I can do about this?
I don't have access to ThreadsafeLinearAllocator.cpp …

Edit:

I get this with only calling

    dspCommandBlock.Complete();
    dspCommandBlock = dspGraph.CreateCommandBlock();

from an Update() handler.

The graph is empty; no sounds have been created yet. (Sound playback works correctly; it's just that the editor is not really usable with all the spam.)
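For completeness, the whole repro reduces to this skeleton (graph creation is elided because the preview signatures are still changing; the `Unity.Audio` namespace is assumed):

    using Unity.Audio; // preview namespace, subject to change
    using UnityEngine;

    // Minimal repro: complete last frame's command block and open a new one
    // every Update. Nothing is ever added to the graph.
    public class DspGraphRepro : MonoBehaviour
    {
        DSPGraph dspGraph;               // assume created/disposed elsewhere
        DSPCommandBlock dspCommandBlock; // assume initialized with the graph

        void Update()
        {
            dspCommandBlock.Complete();                      // flush previous block
            dspCommandBlock = dspGraph.CreateCommandBlock(); // open the next one
        }
    }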

The errors come if I disable VSync.

With VSync set to Every V Blank it works correctly.
With VSync set to Don’t Sync is spams this error.

It would be good to have a better workaround for this.
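In the meantime, the VSync workaround can also be forced from script (standard QualitySettings/Application settings; whether a frame-rate cap alone is enough is an assumption):

    using UnityEngine;

    // Applies the VSync workaround from script until the allocator fix lands.
    public class VSyncWorkaround : MonoBehaviour
    {
        void Awake()
        {
            QualitySettings.vSyncCount = 1; // equivalent to "Every V Blank"
            // If vsync must stay off, capping the frame rate may work as well:
            // QualitySettings.vSyncCount = 0;
            // Application.targetFrameRate = 60;
        }
    }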

This is happening because DSPGraph is internally using the temp job allocator when dispatching to other threads, and the frame rate (when not vsynced) is “outrunning” the dispatcher, triggering the allocator’s leak heuristics.
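To illustrate the heuristic being tripped, here is a standalone example (not DSPGraph internals): any TempJob allocation held for more than roughly four frames logs the same warnings.

    using Unity.Collections;
    using UnityEngine;

    // Standalone illustration of the TempJob lifetime heuristic: TempJob
    // allocations are expected to be disposed within ~4 frames, and the
    // allocator logs the same "more than 4 frames old" warning otherwise.
    public class TempJobLeakDemo : MonoBehaviour
    {
        NativeArray<int> buffer;
        int age;

        void Start()
        {
            buffer = new NativeArray<int>(16, Allocator.TempJob);
        }

        void Update()
        {
            // Holding the allocation past ~4 frames reproduces the console
            // spam; disposing late then logs the "older than its permitted
            // lifetime" message as well.
            if (++age > 4 && buffer.IsCreated)
                buffer.Dispose();
        }
    }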

We plan to have this fixed for the 2019.2 preview - for now, I don’t have a better workaround for you. :expressionless:

1 Like

Wow, this looks really exciting! Audio hasn't progressed much since 5.0.0. Can't wait to SIMD all the mixing and effects. I'm so excited that I just went and searched for my signals textbook from university.

9 Likes

As for letting the user choose how they hook into the audio callbacks, I already have a use case for this: I'm using a custom audio engine in C/C++ and want to bring my samples up into Unity and mix them there.

I will definitely be trying this out when it gets some docs.

2 Likes

Is there any sound occlusion (by walls or similar) here, or is it planned?

1 Like

Personally, I cannot wait to finally be able to set non-zero loop starts in my audio clips without having 3+ audio sources acting as one. Even if I have to pull open the codebase and create my own output component/clip type, at least that’s (possibly) going to be an option now. (And I’m kind of looking forward to doing it, honestly.)
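To make the pain point concrete, here is a two-source version of that workaround (a sketch; `introSource`/`loopSource` play hypothetical clips split at the loop point):

    using UnityEngine;

    // Sketch of the multi-source workaround: emulate a non-zero loop start by
    // scheduling an intro source and a looping body source back to back.
    public class LoopWithIntro : MonoBehaviour
    {
        public AudioSource introSource; // plays [0, loopStart)
        public AudioSource loopSource;  // plays [loopStart, end), looped

        void Start()
        {
            double start = AudioSettings.dspTime + 0.1; // scheduling headroom
            // Compute the intro length from samples to keep the seam sample-accurate.
            double introLength = (double)introSource.clip.samples / introSource.clip.frequency;
            introSource.PlayScheduled(start);
            loopSource.loop = true;
            loopSource.PlayScheduled(start + introLength);
        }
    }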

1 Like

In UnityCsReference, if you look at the latest 2019.1 branch you can already see the backbone of the new audio stuff in the "Audio" folder (a diff against the 2018.3 branch shows many newly added files). Fortunately, now that Unity is transitioning to visible C# code, we can study it in the meantime without ECS in the way here (how a sample provider could stream us bytes for DSPGraph, etc.).

3 Likes

Event order when exiting play mode in the editor is somewhat messed up.
I have an AudioSystem with OnEnable/OnDisable.

When exiting play mode:

  • OnDisable is called in the player scene where I release all DSPNodes
  • OnEnable is called in the editor scene where I re-initialize my graph for editor mode
  • I get a message saying: “Destroyed 1 DSPNodes that were not cleaned up. Memory leak may result.”

Cleaning up the play-mode DSPNodes should happen before OnEnable is called in the editor scene (see the skeleton below).
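In case it helps triage, the lifecycle in question reduces to this skeleton (the DSPGraph calls are elided as comments, since the preview API is in flux):

    using UnityEngine;

    // Skeleton of the AudioSystem lifecycle described above. The expectation
    // is that the play-mode OnDisable (releasing all DSPNodes) fully tears
    // the graph down before the edit-mode OnEnable rebuilds it.
    public class AudioSystem : MonoBehaviour
    {
        void OnEnable()
        {
            // build the graph for the current context (play mode or edit mode)
        }

        void OnDisable()
        {
            // release every DSPNode via a command block and Complete() it here,
            // so the graph has nothing left to destroy on teardown
        }
    }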