Can OnAudioFilterRead (OAFR) only run one instance in Unity?

I have two scripts generating sine waves through OnAudioFilterRead at different notes, on two different game objects.

When I run the scene, only one of the OAFRs can be heard. If I uncheck that script, the other one suddenly starts playing, and when I re-enable it, it silences the other one again.

Does that mean that OAFR is a monophonic function?

I presume that it is, because to have many running you would just add the audio together in one function, and there is no reason why Unity would include a mixer in its sound card callback.

Hi,

Can you post your code?

The behaviour you’re describing is either a bug (which version of Unity are you using?), or the result of some mistake in your code.

Cheers,

Gregzo

I’ve had that happen when I tried multiple audio sources on the same object. I expected them to be additive.

Other weirdnesses you might encounter are panning issues, and one that I found slightly surprising: OAFR callbacks are indeed on a different thread, but what surprised me is that each callback appears on a new, different thread, not just one separate thread.

It’d be nice if we had an instanced callback, with each audio source given its own dedicated thread at minimum latency consistently, à la a mix of the audio clip callback and OAFR.

I’m using the latest version of Unity. Here is the simplest possible code using OnAudioFilterRead that streams a sawtooth. You can change the tone to different frequencies, and place the script twice on the same object or many times on many game objects, but the code only runs once: only one tone and one amplitude of sound can be heard.

I don’t think the UnityScript compiler produces a different compilation than the .cs version would; you can try it, I think the bug still happens.

Do you think it’s because I am using the Unity Free version? Perhaps there are audio limitations, like only one filter per audio source or something?

var tone : float = 2048; //change this to hear different sounds polyphonically

function OnAudioFilterRead( data : float[], channels : int )
{
    for (var i = 0; i < data.Length; i += channels)
    {
        data[i] = (i / tone) % 1;
        data[i + 1] = (i / tone) % 1;
    }
}

Hey marionette, thank you. How can you see that OAFR callbacks are on many different threads? Does that mean the code runs many times but is only heard once by the sound card?

As it’s sound card buffer input code, I would expect it to be just raw writing of data to a DirectxSoundBuffer32 implementation in Unity or something of that type; I don’t know why the callback would include mixdown capabilities.

What do you mean by function callbacks? I don’t understand the idea of callbacks very well; I just thought the function was called to run asynchronously from other code and to fill in an array. Thank you.

Another absolute clarification needed: is the OAFR callback itself a Pro-only feature or not? I understand that the filters are, but a definitive answer regarding OAFR is needed imo.

To check the threading: if you are using C#, add this to the OAFR callback. I also use Visual Studio to write my code, as well as the Visual Studio 2013 tools to attach to Unity. You can see the threads spawning like crazy.

Debug.Log(System.Threading.Thread.CurrentThread.ManagedThreadId);

I’m on my mobile atm, so double-check that.

Think of Function callbacks as events with ref parameters in this case.

OAFR is more static, while audio clip creation lets you define an instanced method for its callbacks.

One reason they do OAFR the way they do is to ‘stack’ the DSP with filters. What we need is a way to populate data similar to the way PortAudio does it: a single dedicated instanced callback per source.

That way I could implement a SynchronizationContext and be able to send events back to the main thread if I wanted.
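A minimal sketch of that idea, assuming a Unity version that installs a main-thread SynchronizationContext (recent versions do; older ones may not). The class name and log message are mine, not from the thread:

```csharp
using System.Threading;
using UnityEngine;

// Sketch: capture the main thread's SynchronizationContext in Awake,
// then Post from the audio thread to marshal work back to the main thread.
public class AudioEventMarshal : MonoBehaviour
{
    SynchronizationContext _mainContext;

    void Awake()
    {
        // Awake runs on the main thread, so the current context is the main one.
        _mainContext = SynchronizationContext.Current;
    }

    void OnAudioFilterRead( float[] data, int channels )
    {
        // Post queues the delegate to run later on the main thread.
        _mainContext.Post( _ => Debug.Log( "audio buffer processed" ), null );
    }
}
```

Posting per buffer would be wasteful in practice; you’d normally gate it behind a flag or a queue.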

void OnAudioFilterRead(float[] data, int channels)
    {
        Debug.Log(Thread.CurrentThread.ManagedThreadId);
    }

Hi,

OAFR is not a pro only feature.

The problem with the OP’s code is that each component overwrites data from the previous one. Either have only one OAFR implementation per AudioSource, or mix additively into the data array.
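A minimal sketch of the additive approach; NextSample() and gain are hypothetical placeholders for whatever your generator produces, and stereo interleaving is assumed as in the OP’s example:

```csharp
// Instead of overwriting the buffer, add your samples to what's already there.
void OnAudioFilterRead( float[] data, int channels )
{
    for ( int i = 0; i < data.Length; i += channels )
    {
        float sample = NextSample() * gain; // hypothetical generator and gain
        data[ i ]     += sample; // += mixes; = would silence previous filters
        data[ i + 1 ] += sample;
    }
}
```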

Ahhh gotcha, and thanks for the heads up on OAFR.

Since I have you, and not to hijack the op’s post, but may I send you a pm about a separate issue I’m having?

How do you assign only one OAFR implementation per AudioSource? I found that it runs without an audio source. I tried putting an audio source on every OAFR-generating game object, but the audio source just played its sample without interacting with the OAFR.

The reason it generates thousands of thread IDs must be that it is interacting with a different processor than the main thread’s. Perhaps it has to change threads all the time, so that if a thread hangs the sound doesn’t stop.

@Marionette
Feel free to pm, but do link to a forum thread you could start for the issue: nice to share info with all!

@drudiverse
Afaik you need an AudioSource for OAFR to fire. Its clip can be null. Maybe this changed lately and I wasn’t aware, but I doubt it.

About thread id: I also doubt these strange results. The data array is recycled between all AudioSources, and it would make little sense to start a new thread for each… Try comparing threads by reference and see what happens.
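A quick sketch of that reference comparison, attached next to an AudioSource so OAFR fires (the class name is mine). Comparing the Thread object itself distinguishes "a genuinely new thread each callback" from "the same thread with odd IDs":

```csharp
using System.Threading;
using UnityEngine;

public class ThreadIdentityCheck : MonoBehaviour
{
    Thread _firstThread;

    void OnAudioFilterRead( float[] data, int channels )
    {
        if ( _firstThread == null )
        {
            // Remember the thread the first callback ran on.
            _firstThread = Thread.CurrentThread;
        }
        else if ( !ReferenceEquals( _firstThread, Thread.CurrentThread ) )
        {
            Debug.Log( "OAFR moved to a different thread object" );
        }
    }
}
```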

Or they might be delegates for the callback, shrug. All I know is that when I wanted to fire an event back to the main thread, I started implementing a synchronization context, and that’s when I found that different threads were being spawned per callback…

Theoretically, I would’ve thought there should be only one thread ID each time it’s called, but there’s not, as you see.

btw, I also requested something in feedback. If you agree, please vote :wink:

http://feedback.unity3d.com/suggestions/instanced-pre-spatialization-audio-callback

@Marionette
About getting pre-spatialization data: you could hack around it by implementing your own AudioClippish container, pipe audio data via OnAudioFilterRead, and piggyback Unity’s spatialization calculations by using a tiny AudioClip of buffer length filled with only 1.0f values.

You then multiply your own audio data by the spatialized array of 1.0f, et voilà: a pre-spatialization callback and / or spatialization of your own generative/pre-rendered audio data.

Not elegant, but worth a try. Of course, any roll off should be applied after, and not on the dummy AudioClip.
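A rough sketch of the multiplication step, assuming the attached AudioSource is looping a short clip filled with 1.0f samples so that data[] arrives holding Unity’s spatialization gains; _myAudio is a hypothetical buffer of your own pre-rendered samples:

```csharp
// data[] contains Unity's spatialized 1.0f values; multiplying by our own
// samples applies that spatialization to generative audio. Per the post
// above, any roll-off should be applied separately, not on the dummy clip.
void OnAudioFilterRead( float[] data, int channels )
{
    for ( int i = 0; i < data.Length; i++ )
    {
        data[ i ] *= _myAudio[ i % _myAudio.Length ]; // hypothetical source buffer
    }
}
```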

Hmmm… i’ll have to check it out. thanks :wink:

i just hate having to do it this way, especially since UT already have what we need ;(

I’m toying with the idea of using an inline ambisonic solution… we’ll see, the journey begins tonight :wink:

Sorry Gregzo: 1/ It doesn’t need an audio source. 2/ It can’t run polyphonically even in the simplest case scenario. Please only affirm that it can work polyphonically if you have a clear memory of running it that way in the past, as I have kept going back and forth in Unity to test what you say, and it seems that either Unity has been updated or OAFR never had an integrated mixer function. Unless you know a trick to run many OAFRs in parallel, we can conclude that it doesn’t work polyphonically, it doesn’t need an audio source, it does need a listener, and the OAFR function does produce a new thread ID every time it runs (521, 523, 524, 525, 526… 1211, 1212, 1213…), which is something to do with the sound API.

What would be great is an Execution Order of Sound Events Unity reference page, so that we could see how audio clips, filters, and OAFRs are sequenced.

Unless proven otherwise, the answer to this question appears to be No!

Hi,

I think we might live in parallel universes.

In mine, OnAudioFilterRead requires an AudioSource to be called.
Edit: It just occurred to me that you might be adding your scripts to the listener, in which case OAFR will run too, but will overwrite all audio data. Most probably that’s your issue.

In mine, I attach the following script to any reasonable number of game objects and it works.
I’ve been working with OAFR since the method appeared in Unity 3.5, and I can assure you all I write here is backed up by recent tests, on Unity 4.3, 4.5, 4.6 and 5.0 beta 13 and 16.

Try this component on empty, virgin game objects:

using UnityEngine;
using System.Collections;

public class SineTest : MonoBehaviour
{
    public float frequency = 440f;
    public float gain = .1f;
    public bool  addAudioSource = true; //Set to false = OAFR won't be called
    AudioSource _source;
 
    float _increment;
    float _phase;
    float _cachedFrequency;
 
    int _sampleRate;
 
    void Awake()
    {
        _sampleRate = AudioSettings.outputSampleRate; //Can't call from audio thread, so cache here
     
        if( addAudioSource )
        {
            _source = this.gameObject.AddComponent< AudioSource >();
        }
    }
 
    // Simple sine gen, phase not properly updated on freq change.
    void OnAudioFilterRead( float[] data, int channels )
    {
        int i;
        float sinVal;
     
        if( channels != 2 ) //Quick test supports stereo only
        {
            return;
        }
     
        if( _cachedFrequency != frequency ) //recompute increment only if freq changes
        {
            _cachedFrequency = frequency;
            _increment = frequency * 2 * Mathf.PI / _sampleRate; //set increment
        }
     
        for( i = 0; i < data.Length; i += channels )
        {
            sinVal = Mathf.Sin( _phase ) * gain;
            data[ i ] = sinVal;
            data[ i + 1 ] = sinVal;
            _phase += _increment;
            if( _phase > ( 2f * Mathf.PI ) )
            {
                _phase -= ( 2f * Mathf.PI );
            }
        }
    }
}

Edit: Corrected phase resetting.