DSPGraph current limitations?

I see the project sitting there on GitHub, untouched for a while. It looks like MetaSounds, which is quite incredible and ahead of its time.

  • What’s the bug list?
  • Any hardware not supported?
  • Is it built on top of old FMOD?
  • Caveats?

I’m curious, where do you see the GitHub version?

I use it. It still works in 2022.3 with the latest versions of Burst. There are two bugs I know of. The first is that it sometimes causes lock-ups on domain reload after code changes due to not shutting down properly. That bug is engine-side; I captured it stuck in the audio C++ code with VTune. It doesn’t happen all the time, and some projects are more affected than others, though I cannot say why. The other bug is that an error about a GrowableBuffer index being out of range is sometimes logged in the editor when exiting play mode. That error seems to be benign.

As for the API, it is a little clunky and requires you to do some unsafe things to make it work. There are lots of little pitfalls and performance traps, and the documentation leaves a lot to be desired. You’ll be wishing you had access to parallel-for job types to schedule in the chain. However, its command block architecture is very nice once you learn to prewarm kernels, and you can do graph configuration from within Burst jobs.

Performance is very good, though there’s a non-trivial overhead from the attenuators and parameter interpolators attached to every kernel node, which can really add up if you try making hundreds of nodes. Fewer, more complex kernels will get you the absolute best performance.

Hope that helps!


It’s a dead project.

It does. Thanks a lot! That’s the most comprehensive writeup by a long shot.
Are you using it in prod? How is stability in builds? Any hardware where it tends to crap out?
What are you doing that requires 100s of nodes?
How do you “prewarm kernels”? A job is always transient, so I don’t know of a way to keep one spinning for more than a few frames before the job system bitches at me.

I’m a hobbyist, so my definition of “production” may be a little looser than yours. But I use it in this project, which you can build for various desktop platforms and try out for yourself: https://github.com/Dreaming381/lsss-wip

I know Windows works pretty well. I’ve made builds for macOS and Linux but haven’t verified yet whether they run correctly. This audio solution is a feature of my framework, so there may be others who have used it on platforms I haven’t personally tested.

I’m not, but back in the day there were people who created a node for every audio source they wanted to play. That never quite scaled as well as my solution, which premixes a bunch of audio sources in normal jobs and delivers multi-channel buffers to DSPGraph, which applies SVFs, attenuators, and limiters to create the final mixdown. If you are curious, this is the official documentation readme of my audio solution: https://github.com/Dreaming381/Latios-Framework-Documentation/blob/main/Myri%20Audio/README.md

You create a CommandBlock on the main thread, queue up the creation of a node and an update for each of your kernel types, and then discard the block. This causes Unity to register the reflection data for all the kernels so that you can use CommandBlocks in Burst jobs afterwards.
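For concreteness, a minimal sketch of that prewarm trick, based on my understanding of the 0.1.0-preview API (`GainKernel` is a hypothetical placeholder for your own kernel type, and I’m assuming `DSPCommandBlock.Cancel()` is the call that discards a block):

```csharp
using Unity.Audio;

public static class DSPGraphPrewarm
{
    // One-time, main-thread prewarm. Substitute every kernel type your
    // graph will ever create; each one needs its reflection data baked.
    public static void Prewarm(DSPGraph graph)
    {
        var block = graph.CreateCommandBlock();

        // The node is never actually created; merely queuing the command
        // forces Unity to register the Burst reflection data for the
        // kernel, so CommandBlocks built later inside Burst jobs work.
        block.CreateDSPNode<GainKernel.Parameters, GainKernel.Providers, GainKernel>();

        block.Cancel(); // discard the block instead of Complete()-ing it
    }
}
```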

DSPGraph is a little weird in that kernels are also jobs, but they are persistent and repeatedly called. And the kernel updates are also jobs but are temporary. Also weird is that CommandBlocks are effectively scheduling jobs from other jobs.
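To illustrate what such a persistent kernel looks like, here is a minimal gain-kernel sketch. The shape of `IAudioKernel`, `ExecuteContext`, and the interleaved `SampleBuffer.Buffer` layout are my assumptions based on the 0.1.0-preview versions and may differ in other builds:

```csharp
using Unity.Audio;
using Unity.Burst;

[BurstCompile(CompileSynchronously = true)]
public struct GainKernel : IAudioKernel<GainKernel.Parameters, GainKernel.Providers>
{
    public enum Parameters { Gain }
    public enum Providers { }

    // Called once when the node is created via a CommandBlock.
    public void Initialize() { }

    // Called repeatedly, once per audio frame, for the node's lifetime.
    public void Execute(ref ExecuteContext<Parameters, Providers> context)
    {
        if (context.Inputs.Count == 0 || context.Outputs.Count == 0)
            return;

        var input  = context.Inputs.GetSampleBuffer(0).Buffer;
        var output = context.Outputs.GetSampleBuffer(0).Buffer;

        for (int i = 0; i < output.Length; i++)
        {
            // The parameter interpolator provides a per-sample value;
            // this is part of the per-node overhead mentioned earlier.
            float gain = context.Parameters.GetFloat(Parameters.Gain, i);
            output[i] = input[i] * gain;
        }
    }

    // Called when the node is destroyed.
    public void Dispose() { }
}
```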


Without looking too deeply into it, I’m wondering what happens to audio latency with all these job dependencies (I’m wildly guessing that’s how the graph is evaluated, if each node is a job). Did you notice any delay?


There’s a concept of “audio frames”, and your jobs have to complete before the next audio frame arrives to fill the feed buffer. DSPGraph schedules audio jobs in sync with the audio clock to maximize the amount of time you have to complete the kernel jobs. In addition, you have the option to either run all jobs on the audio thread or to also leverage the worker threads. The default is the latter, but I typically prefer the former because it is more reliable, as long as I don’t need more DSP time than the audio thread alone can provide (I’m well short of that threshold).

As for delays, remember that the simulation is always running multiple frames ahead of what is actually being displayed to the player. A consistent frame or two of audio delay probably makes the audio more in sync, but honestly, it is hard for the player to notice. Some audio effects require buffering the signal to function properly, like my brickwall limiter. Until it becomes noticeably out of sync, you just do what you need to do to make it sound good.

I see, that’s where you’d want to tune the number of frames ahead; I forgot about that setting.
You know that video presentation that Unity made? There was a node graph in one of the slides. Is there such a thing somewhere in their repo? I have release 11 and don’t see it.

Which video? I remember there being 3 different ones.

Again, where is this repo? I only have the version from Package Manager, which is 0.1.0-preview.22.

But no. The visualizer is not in the latest version.