As requested, we are starting a dedicated space to talk about our new DOTS Audio system, and specifically DSPGraph, which a lot of you have been exploring through the Megacity demo. We would like to use this thread as a means of getting feedback and requirements, and having candid discussions about your audio needs. Please feel free to post any requests, concerns, or issues you have with the new DOTS-based Audio system in this thread.
Also look out in this space for new announcements and updates as we go forward building our new DOTS-based Audio system in Unity!
You can learn more about our design philosophy and approach in Wayne's Unite LA talk about DOTS Audio and DSPGraph.
Please note that DSPGraph is still under development and we are reworking our APIs. Do expect things to change, and consider this a very early version for you all to play with!
From what I understand this system generates a graph similar to the way the Playable graph works, and then there's a visualiser. Are there plans for making it a system like Shader/VFX Graph, where the authoring can happen in a UI and not just code? I also wonder why the UI design here is disparate compared to the other tools.
Out of curiosity, with this system, would we be able to choose sound devices to input from and output to? Or would we still be limited to Unity's current system for that? Being able to provide mic inputs via direct DOTS integration would be neat.
I imagine having a few mic inputs and potentially four output sources providing different tracks (music, VoIP, sound FX).
I'm posting my question here because it's a better place:
I see that an IAudioJob containing ExecuteContext.PostEvent doesn't compile with Burst.
Will this be supported by Burst at some point or will the API change?
I see the Megacity demo avoids this API even though it's part of the presentation.
I can work around this with a one-element array
[NativeDisableContainerSafetyRestriction]
public NativeArray<bool> voiceCompleted;
and polling that bool on the main thread, but events look cleaner.
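For reference, here's a minimal sketch of that polling pattern. It's written against a plain Burst IJob rather than the preview IAudioJob interface (whose exact generic signature is still changing), so the DSPGraph-specific part is only indicated in a comment; the VoiceJob and VoicePoller names are made up for illustration.

using Unity.Burst;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs;
using UnityEngine;

// The job writes a flag into a shared NativeArray instead of posting an event.
[BurstCompile]
struct VoiceJob : IJob
{
    [NativeDisableContainerSafetyRestriction]
    public NativeArray<bool> voiceCompleted;

    public void Execute()
    {
        // ... process samples ...
        voiceCompleted[0] = true;   // instead of ExecuteContext.PostEvent
    }
}

// Hypothetical MonoBehaviour that owns the flag and polls it every frame.
class VoicePoller : MonoBehaviour
{
    NativeArray<bool> voiceCompleted;

    void OnEnable()
    {
        voiceCompleted = new NativeArray<bool>(1, Allocator.Persistent);
    }

    void OnDisable()
    {
        voiceCompleted.Dispose();
    }

    void Update()
    {
        if (voiceCompleted[0])
        {
            voiceCompleted[0] = false;
            Debug.Log("Voice finished playing");
        }
    }
}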
[quote="vertxxyz, post:3, topic: 737349, username:vertxxyz"]
Are there plans for making it a system like Shader/VFX Graph, where the authoring can happen in a UI and not just code?
[/quote]Yes (no details yet)
[quote="vertxxyz, post:3, topic: 737349, username:vertxxyz"]
I also wonder why the UI design here is disparate compared to the other tools.
[/quote]As you mentioned yourself, there's a difference between visualizers and authoring tools. GraphView (the UI framework Shader/VFX Graph is built on) is an authoring tool, not really a read-only real-time visualization. This was just a prototype, though, so it will align in the end.
Yes, it will be supported soon.
We're working on a new system for this, with scriptable inputs/outputs. It will essentially be a thin HAL with device selection, working together with DSPGraph.
We're breaking the work into many pieces that we will release separately, with the DSPGraph engine being the first out.
DSPGraph has been developed such that it is already independent of the FMOD APIs. Currently, FMOD is still being used only for input/output. As Wayne also mentions in the Unite LA talk, you can take the DSPGraph output and give it to any other third-party library or even to OnAudioFilterRead, or procedurally generate the audio samples and feed them as input to the graph, and build something which does not require FMOD at all.
So to answer your question, we are working on providing a solution which will enable the users to choose what they want.
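To illustrate the OnAudioFilterRead route mentioned above, here's a rough sketch that procedurally generates samples (a plain sine tone) directly in Unity's audio callback. The same callback could instead be filled from a DSPGraph output buffer or a third-party mixer; the SineFeed class and tone generator are just assumed stand-ins, not part of the DSPGraph API.

using UnityEngine;

// Writes a procedurally generated sine tone into Unity's audio callback.
[RequireComponent(typeof(AudioSource))]
public class SineFeed : MonoBehaviour
{
    public float frequency = 440f;
    double phase;
    double sampleRate;

    void Awake()
    {
        sampleRate = AudioSettings.outputSampleRate;
    }

    // Called on the audio thread; data is an interleaved buffer.
    void OnAudioFilterRead(float[] data, int channels)
    {
        double increment = frequency * 2.0 * Mathf.PI / sampleRate;
        for (int i = 0; i < data.Length; i += channels)
        {
            float sample = Mathf.Sin((float)phase) * 0.25f;
            phase += increment;
            if (phase > 2.0 * Mathf.PI)
                phase -= 2.0 * Mathf.PI;
            for (int c = 0; c < channels; c++)
                data[i + c] = sample;   // same value on every channel
        }
    }
}

Attaching this next to an AudioSource plays the generated samples through the regular audio output.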
One audio feature I've always wanted to do in Unity (which I don't believe there is a straightforward way to do) is to mix 3D audio from two different listeners.
For instance, imagine a first-person point-and-click adventure game with Myst-like crossfade transitions when the player clicks to move. I'd also like to cross-fade the audio between the 'old' and 'new' locations: fading out audio from where the player was, while fading in audio from where the player is moving to.
Right now, I don't think that's possible in Unity without some clever hacky trickery.
Would something like this potentially be possible with the move to ECS audio?
Now with 2019.1.0 RC1 I'm getting console spam:
Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
(Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 539)
To Debug, enable the define: TLA_DEBUG_STACK_LEAK in ThreadsafeLinearAllocator.cpp. This will output the callstacks of the leaked allocations
(Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 541)
Internal: deleting an allocation that is older than its permitted lifetime of 4 frames (age = 5)
(Filename: C:\buildslave\unity\build\Runtime/Allocator/ThreadsafeLinearAllocator.cpp Line: 313)
Is there something I can do about this?
I don't have access to ThreadsafeLinearAllocator.cpp …
This is happening because DSPGraph is internally using the temp job allocator when dispatching to other threads, and the frame rate (when not vsynced) is "outrunning" the dispatcher, triggering the allocator's leak heuristics.
We plan to have this fixed for the 2019.2 preview - for now, I don't have a better workaround for you.
Wow, this looks really exciting! Audio hasn't progressed much since 5.0.0. Can't wait to SIMD all the mixing and effects. I'm so excited that I just went and searched for my signals textbook from university.
As far as letting the user choose how they hook into the audio callbacks, I already have a use case for this, since I am using a custom audio engine in C/C++ and want to bring my samples into Unity and mix them there.
I will definitely be trying this out when it gets some docs.
Personally, I cannot wait to finally be able to set non-zero loop starts in my audio clips without having 3+ audio sources acting as one. Even if I have to pull open the codebase and create my own output component/clip type, at least that's (possibly) going to be an option now. (And I'm kind of looking forward to doing it, honestly.)
In UnityCsReference, if you look at the latest 2019.1 branch you can already see the backbone of the new audio stuff in the 'Audio' folder. (Many files are newly added compared to the 2018.3 branch, shown as green in the picture.) Fortunately, now that Unity is transitioning to visible C# code, we can study it in the meantime without ECS in the way here (how a sample provider could stream us bytes for DSPGraph, etc.).