Hi all!
I’m attempting to write a non-realtime renderer for audio visualizers in Unity.
Grabbing the frames seems okay so far, but I can’t get GetSpectrumData to operate properly outside of realtime. Whether it’s called on AudioListener or on an AudioSource, it seems to analyze the live audio stream itself rather than the audio file’s underlying data.
That means that when the renderer runs in non-realtime mode (which it must, in order to capture frames at high resolution), the stream stops being a reliable source of spectrum data: the constant pausing and unpausing creates gaps in the analysis, as if it were analyzing an audio file that itself contained stretches of silence between every bit of audio.
The effect on the vis can be seen here: the faulty non-realtime vis is on the left, with a clear peak that fails to “fire” compared to the realtime vis on the right.
I get the sense from reading previous posts that GetSpectrumData may simply not work in non-realtime mode, and that it may be necessary to “roll my own” spectrum preprocessor to get accurate data; is this the case?
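For reference, here’s the kind of “roll my own” approach I have in mind: instead of sampling the live stream, index directly into the file’s PCM data at each video frame’s timestamp and compute a windowed spectrum there. This is just a minimal Python sketch of the idea, not Unity code; in Unity I assume I’d pull the raw samples with AudioClip.GetData and use a proper FFT instead of the naive DFT below, which is only for illustration.

```python
import math

def spectrum_at(samples, sample_rate, time_sec, n=256):
    """Magnitude spectrum of an n-sample window taken directly from the
    file's PCM data at time_sec -- independent of any realtime stream."""
    start = int(time_sec * sample_rate)
    window = list(samples[start:start + n])
    if len(window) < n:
        window += [0.0] * (n - len(window))  # zero-pad past end of file
    # Hann window to reduce spectral leakage at the frame boundaries
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]
    w = [s * h for s, h in zip(window, hann)]
    # Naive O(n^2) DFT for clarity; a real implementation would use an FFT
    mags = []
    for k in range(n // 2):  # only the first n/2 bins are meaningful
        re = sum(w[i] * math.cos(-2 * math.pi * k * i / n) for i in range(n))
        im = sum(w[i] * math.sin(-2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

# Example: a 1 kHz sine at an 8 kHz sample rate should peak in
# bin freq * n / sample_rate = 1000 * 256 / 8000 = 32
sr = 8000
samples = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(sr)]
mags = spectrum_at(samples, sr, 0.5)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])  # -> 32
```

Since this reads from the file data at an explicit timestamp, the pausing and unpausing of the non-realtime capture loop wouldn’t matter at all, which is why I suspect something like it may be the answer.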
(I’m tagging @jkeogh1413 in this post, clearly one of the most experienced audio coders in Unity based on some previous responses!)
Thanks for any help that the community can provide!
–CaliCoastReplay