Send multiple audio sources to same target in Native Audio routing plugin

Currently it isn’t possible to send more than one audio source to the routing plugin demo, as the second source overwrites the first. Can anyone help change the routing.cpp file so it adds the new data to the existing data instead of replacing it?

Here’s the code (you’ll find it on Bitbucket as well):

#include "AudioPluginUtil.h"

namespace Routing
{
    const int MAXINDEX = 128;

    extern bool bypass = false;

    enum Param
    {
        P_TARGET,
        P_BYPASS,
        P_NUM
    };

    int bufferchannels[MAXINDEX];
    RingBuffer<65536> buffer[MAXINDEX];
    RingBuffer<65536> readbuffer[MAXINDEX];

    struct EffectData
    {
        float p[P_NUM];
    };

    int InternalRegisterEffectDefinition(UnityAudioEffectDefinition& definition)
    {
        int numparams = P_NUM;
        definition.paramdefs = new UnityAudioParameterDefinition[numparams];
        RegisterParameter(definition, "Target", "", 0.0f, MAXINDEX - 1, 0.0f, 1.0f, 1.0f, P_TARGET, "Specifies the output that the input signal is routed to. This can be read by scripts via RoutingDemo_GetData");
        RegisterParameter(definition, "Bypass", "", 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, P_BYPASS, "Bypass");
        for (int i = 0; i < MAXINDEX; i++)
            buffer[i].Clear();
        return numparams;
    }

    UNITY_AUDIODSP_RESULT UNITY_AUDIODSP_CALLBACK CreateCallback(UnityAudioEffectState* state)
    {
        EffectData* effectdata = new EffectData;
        memset(effectdata, 0, sizeof(EffectData));
        state->effectdata = effectdata;
        InitParametersFromDefinitions(InternalRegisterEffectDefinition, effectdata->p);
        return UNITY_AUDIODSP_OK;
    }

    UNITY_AUDIODSP_RESULT UNITY_AUDIODSP_CALLBACK ReleaseCallback(UnityAudioEffectState* state)
    {
        EffectData* data = state->GetEffectData<EffectData>();
        delete data;
        return UNITY_AUDIODSP_OK;
    }

    UNITY_AUDIODSP_RESULT UNITY_AUDIODSP_CALLBACK SetFloatParameterCallback(UnityAudioEffectState* state, int index, float value)
    {
        EffectData* data = state->GetEffectData<EffectData>();
        if (index >= P_NUM)
            return UNITY_AUDIODSP_ERR_UNSUPPORTED;
        data->p[index] = value;
        return UNITY_AUDIODSP_OK;
    }

    UNITY_AUDIODSP_RESULT UNITY_AUDIODSP_CALLBACK GetFloatParameterCallback(UnityAudioEffectState* state, int index, float* value, char* valuestr)
    {
        EffectData* data = state->GetEffectData<EffectData>();
        if (index >= P_NUM)
            return UNITY_AUDIODSP_ERR_UNSUPPORTED;
        if (value != NULL)
            *value = data->p[index];
        if (valuestr != NULL)
            valuestr[0] = 0;
        return UNITY_AUDIODSP_OK;
    }

    int UNITY_AUDIODSP_CALLBACK GetFloatBufferCallback(UnityAudioEffectState* state, const char* name, float* buffer, int numsamples)
    {
        return UNITY_AUDIODSP_OK;
    }

    UNITY_AUDIODSP_RESULT UNITY_AUDIODSP_CALLBACK ProcessCallback(UnityAudioEffectState* state, float* inbuffer, float* outbuffer, unsigned int length, int inchannels, int outchannels)
    {
        EffectData* data = state->GetEffectData<EffectData>();
        bypass = (data->p[P_BYPASS] >= 0.5f);
        Routing::bypass = bypass;

        memcpy(outbuffer, inbuffer, sizeof(float) * length * inchannels);
        int target = (int)data->p[P_TARGET];
        if (!(state->flags & UnityAudioEffectStateFlags_IsPlaying) && (state->flags & (UnityAudioEffectStateFlags_IsMuted | UnityAudioEffectStateFlags_IsPaused)))
        {
            Routing::buffer[target].SyncWritePos();
        }
        bufferchannels[target] = inchannels;
        for (unsigned int n = 0; n < length; n++)
        {
            for (int i = 0; i < inchannels; i++)
            {
                buffer[target].Feed(inbuffer[n * inchannels + i]);
            }
        }

        return UNITY_AUDIODSP_OK;
    }
}

extern "C" UNITY_AUDIODSP_EXPORT_API void RoutingDemo_GetData(int target, float* data, int numsamples, int numchannels)
{
    if (target < 0 || target >= Routing::MAXINDEX || Routing::bypass)
        return;
    int skipchannels = Routing::bufferchannels[target] - numchannels;
    if (skipchannels < 0)
        skipchannels = 0;
    int zerochannels = numchannels - Routing::bufferchannels[target];
    if (zerochannels < 0)
        zerochannels = 0;
    for (int n = 0; n < numsamples; n++)
    {
        for (int i = 0; i < numchannels; i++)
        {
            Routing::buffer[target].Read(data[n * numchannels + i]);
        }
        Routing::buffer[target].Skip(skipchannels);
        for (int i = 0; i < zerochannels; i++)
            data[n * numchannels + i + numchannels - zerochannels] = 0.0f;
    }
}

Besides it not being possible to send multiple sources to the same target ID, it isn’t possible to ‘listen’ to the same target ID with multiple listeners either. As soon as you call RoutingDemo_GetData with the same target ID from more than one script, the data is lost for both.

The talk ‘Unity audio under the hood’ by Jan Marguc and the 2015 talk by Wayne Johnson give a bit more info on this; sadly the slides in the video are unreadable (too low-res) and are no longer available on the Unity3d website. It would be nice to have some more info / documentation on the Native Audio SDK, explaining the detailed workings of the plugins and the framework.

In every talk about the Unity audio system, Unity encourages people to develop their own plugins and put them on the Asset Store, but since the release in 2015 I haven’t seen a single native audio plugin in the store (not counting the spatializer plugins, which probably share part of the same code). I guess this is because of the lack of support and documentation.

Maybe some of the other plugins give a solution.

I see the CorrelationMeter uses the history buffer to provide data to the editor’s CorrelationMeter visualizer, which seems to run in parallel to lower its impact. Would it be possible to use the same history buffer as a non-destructive way to let several C# calls to RoutingDemo_GetData on the same target ID coexist?

But as I’m just getting to grips with C++, it’s just guesswork for now (until Unity delivers support / documentation).

there is enough documentation in the mentioned talks, and there is a whole repo with example implementations
the lack of adoption of AudioMixer plugins stems mainly from the fact that there are currently two different audio workflow idioms in Unity (AudioMixer vs. audio components on game objects), while:

  • it’s harder to achieve full platform coverage with native plugins
  • scripting access to the AudioMixer is somewhat more cumbersome (it’s easier to access GO components from a MonoBehaviour)
  • MB components are sufficient for most basic/moderately complex tasks

They were definitely expecting more widespread adoption, but it somehow didn’t happen due to the above

@r618 I don’t find the documentation sufficient. Maybe if you are a C++ developer some source code is all you need, but I definitely need more. The talks are, for now, almost the sole source of deeper information. Sure, they talk about exposing variables and such, but if you want to dig deeper, there is nothing. The examples on Bitbucket are helpful, but not commented at all. What can I do with e.g. AudioPluginUtil.cpp, how do the history buffers work, and how can I expand on these interfaces?

I currently only target the Windows platform, as this is the only platform powerful enough to do the heavy audio processing, so other platforms are not something I care deeply about. And as far as I know, most native audio effects will already work on Android/iOS/PS as well, just not all.

Although I can do everything I want using components, it is just too slow to process. I’m reading, writing and manipulating at least 128 audio tracks simultaneously, and when adding effects you quickly run into >100% audio DSP CPU, which I guess is capped, as the main CPU is only at 10% of its capacity. Moving some of this over to C++ makes it a lot quicker and more efficient. That’s why I need this implementation instead of the component-based solution.

Oh, I have no doubt the AudioMixer is useful (if not a necessity), and the audio plugins make total sense if you want more out of it -

  • but if you want to customise it, the process is more involved than with other subsystem plugins - ok - fair enough

That’s not true, sorry.
The most widespread - yes.

All effects shipped in the AudioMixer work on all supported platforms - that’s their point, but somebody had to make them - if you want to add new ones you need per-platform specific native bits - that’s the problem, since the process varies heavily between them

Yes, you’re totally right. I meant a Steam / Oculus VR and audio capable platform, because that is what I’m targeting.
So I do not need platform-specific plugins, just more performance on Windows.

Do you know if it is possible to use LV2 plugins in Unity? Or adapt the source code of LV2 plugins to work in Native Audio?

Something like an expander/gate would be really useful and should be able to work on all platforms as well. The built-in compressor is missing a ratio control. If Unity releases the source code of those effects as well, it should be easy to modify them, as the multiband plugin example does have a ratio, and the ducking effect could be modified into an expander/gate.

It is not, unfortunately - Unity does not support any /common/ audio hosts and plugins

I had a look at the sources (and, TBH, couldn’t find out at first glance how e.g. the expander is implemented using that fancy RDF/OWL ontology based build system) -
in any case, adapting LV2 would mean ripping it apart completely, taking out just the signal manipulation parts and integrating those into a separate Unity native plugin… easier said than done, probably