Stream audio from disk

I know this has been asked before, but I’m trying to study it more in depth. First off, what I’m trying to do is read audio data from external files at runtime, not from Resources and without using the WWW class.

The reason for not using WWW is that it allocates a lot of memory even for a few files, and in my game I’m adding modding capabilities where users can add their own sounds. Mind that by sounds I don’t mean just a track list of music but also sfx, so it may happen that a lot of sounds get loaded for a scene. I already have a system in place to load and unload stuff not used in the scene, but I would still like some way to stream files directly from disk for certain cases, such as looping effects that may weigh more.

In my research I stumbled into a discussion on Stack Overflow that I can’t seem to find now, but someone posted this function to convert the bytes of a file to floats and normalize them from -1 to 1.

    static public float[] ReadAudioBytes (string fPath) {
      // Resolve the full on-disk path (my own helper).
      string data = NM.IO.Path.GetExistingPath(fPath);

      byte[] b = System.IO.File.ReadAllBytes(data);
      // Four bytes per 32-bit sample.
      float[] f = new float[b.Length / 4];

      for (int i = 0; i < f.Length; i++) {
        // Swap byte order if needed (the snippet assumes big-endian source data).
        if (BitConverter.IsLittleEndian)
          Array.Reverse(b, i * 4, 4);
        // Normalize 32-bit integer samples to the -1..1 range.
        f[i] = BitConverter.ToInt32(b, i * 4) / (float) 0x80000000;
      }
      return f;
    }

I modified it a little for my needs, and I apologize for not mentioning the original author, but I really can’t find the post anymore. Anyway, it works and all, but it still doesn’t solve the allocation problem and is kind of a stuttering mess when loading many files.
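For context, the floats end up in an AudioClip on my side more or less like this (a simplified sketch, not my exact code; the file name is just an example):

    using UnityEngine;

    public class RawSoundPlayer : MonoBehaviour {

      public AudioSource source;

      void Start () {
        // ReadAudioBytes is the function above, assumed to be accessible from here.
        // The path is only an example.
        float[] samples = ReadAudioBytes(Application.streamingAssetsPath + "/sfx/loop_example.raw");

        // Wrap the floats in a clip so a normal AudioSource can play it.
        // Assumes mono data at 44100 Hz (Unity 4.x Create signature, with the 3D flag).
        AudioClip clip = AudioClip.Create("loop_example", samples.Length, 1, 44100, false, false);
        clip.SetData(samples, 0);

        source.clip = clip;
        source.Play();
      }
    }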

I tried to research more but always ended up with the same stuff on my hands, so I’m asking if there is any way to stream audio data from the local disk (I’m reading all external data from StreamingAssets) and, if not, how I could minimize the memory allocation.

Maybe this requires an external library?

Thanks.

Hi Neurological,

Loading wav files from disk is quite easy and can be done efficiently. Compressed audio is another matter: you’ll need a library to handle decompression of at least a few codecs.
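To give an idea of what I mean by easy, a bare-bones reader for 16 bit PCM wav files only takes a few lines. This is a simplified sketch, not the G-Audio reader: it assumes the canonical 44 byte header, whereas a proper reader should walk the RIFF chunks.

    using System;
    using System.IO;

    public static class SimpleWavReader {

      // Reads a canonical 16 bit PCM wav file and returns samples normalized to -1..1.
      public static float[] ReadPcm16 (string path) {
        byte[] bytes = File.ReadAllBytes(path);
        const int headerSize = 44;                         // canonical RIFF / fmt / data layout
        int sampleCount = (bytes.Length - headerSize) / 2; // 2 bytes per 16 bit sample

        float[] samples = new float[sampleCount];
        for (int i = 0; i < sampleCount; i++) {
          short s = BitConverter.ToInt16(bytes, headerSize + i * 2);
          samples[i] = s / 32768f;                         // normalize 16 bit PCM
        }
        return samples;
      }
    }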

The WWW class when set to stream from disk isn’t that bad, but you won’t get very accurate playback.
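For reference, this is the kind of setup I mean (a rough sketch; the file path is only an example):

    using System.Collections;
    using UnityEngine;

    [RequireComponent(typeof(AudioSource))]
    public class WwwStreamExample : MonoBehaviour {

      IEnumerator Start () {
        string url = "file://" + Application.streamingAssetsPath + "/music/theme_example.ogg";
        WWW www = new WWW(url);

        // threeD = false, stream = true: playback can start before the whole
        // file is decoded, at the cost of timing accuracy.
        AudioClip clip = www.GetAudioClip(false, true);

        while (!clip.isReadyToPlay)
          yield return null;

        AudioSource source = GetComponent<AudioSource>();
        source.clip = clip;
        source.Play();
      }
    }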

I have implemented a wav reader in G-Audio which enables sample accurate playback; it’s not in the current release but could be integrated very quickly.

G-Audio iOS already has its own custom reader, which handles compressed audio too but is not as precise. It is mainly meant for feeding data on demand to a time-stretching / pitch-shifting algorithm (not mine, Dirac).

Summing up: it is possible (though not trivial) to implement a custom streamer, but you’ll have to go through OnAudioFilterRead to feed the read data into Unity’s audio buffer.
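Very roughly, the hook looks like this - not G-Audio code, just the skeleton of the approach, with the sample buffer assumed to be filled by your own reader:

    using UnityEngine;

    [RequireComponent(typeof(AudioSource))]
    public class CustomStreamPlayer : MonoBehaviour {

      float[] _samples;   // decoded mono audio, filled elsewhere by your file reader
      int     _position;

      // Called by Unity on the audio thread: whatever you write into data
      // ends up in the audio buffer.
      void OnAudioFilterRead (float[] data, int channels) {
        if (_samples == null)
          return;

        for (int i = 0; i < data.Length; i += channels) {
          float sample = _position < _samples.Length ? _samples[_position++] : 0f;
          for (int c = 0; c < channels; c++)
            data[i + c] = sample;   // copy the mono sample to every output channel
        }
      }
    }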

Cheers,

Gregzo

I guess I can deal with the allocations for ogg vorbis music; in the end I need a max of 2 tracks per scene and after that everything gets cleared, but for the rest the problem persists.

I was checking out your asset and it’s interesting. Even if the wav reader is not yet implemented, what is going on behind the scenes? I mean, from what I understand you don’t parse a wav file, assign it to an AudioClip and play it back, but you have some sort of custom streaming system that reads the file on the go without using AudioClip. How performant is it with, let’s say, 50 sounds at once?

I’ll keep an eye on the asset anyway. I’m usually not a fan of using third-party stuff, but for audio, what I’m trying to do seems to be a hard journey and I’m not sure I’m capable of coming up with something myself.

Just a thing I forgot to say in my first post: I’m using only two formats. Uncompressed 16-bit 44.1 kHz mono WAVE for all sound effects, which are directional, so I guess your custom sound playback won’t work without AudioClip, but I see you took care of that by connecting your system to AudioClips too. And OGG Vorbis 128 kbps stereo for music playback.

Only a few sounds are in stereo WAVE, used for UI interactions and global playback.

Hi,

Yes, G-Audio does its own mixing, no AudioSource (well, just one to get access to OnAudioFilterRead).
This has many advantages: sample accurate playback, zero garbage collection in sample processing (I’ve implemented my own allocator to re-use float chunks), sending or retrieving audio streams to/from native plugins (G-Audio iOS supports Dirac time-stretching / pitch-shifting), modular I/O, and lots more. It is primarily meant as a 2D audio system, but as you noted, AudioSource data can be fed to G-Audio’s mixer too.
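To illustrate the allocator idea - this is not G-Audio’s actual code, just the principle of re-using fixed size float chunks so the GC never kicks in during playback:

    using System.Collections.Generic;

    public class FloatChunkPool {

      readonly int _chunkSize;
      readonly Stack<float[]> _free = new Stack<float[]>();

      public FloatChunkPool (int chunkSize, int preAllocCount) {
        _chunkSize = chunkSize;
        for (int i = 0; i < preAllocCount; i++)
          _free.Push(new float[chunkSize]);
      }

      // Hand out a pooled chunk, or allocate one if the pool is empty.
      public float[] Get () {
        return _free.Count > 0 ? _free.Pop() : new float[_chunkSize];
      }

      // Clear and return a chunk to the pool for later re-use.
      public void Release (float[] chunk) {
        System.Array.Clear(chunk, 0, chunk.Length);
        _free.Push(chunk);
      }
    }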

For UI audio, it’s in my opinion much easier to work with G-Audio than with Unity’s: sounds don’t cut each other off, and there’s no need to manage AudioSource pooling. Just call Play, and you’re done.

50 sounds at once: I’ve pushed it up to more than a hundred simultaneous sounds on an iPad 3 - CPU usage increased by about 30% in that extreme test.

Audio is pre-loaded in sample banks. For now, G-Audio uses AudioClip to load data, then extracts the raw data and stores it in GATData containers. I’m working on a custom async loader to handle loading ogg and wav in a much more efficient way - as it stands, loading generates quite a bit of garbage and should be done at the beginning of a scene. With the new loader, next to zero garbage will be generated. It’s already implemented, but not fully cleaned up / tested yet. It will be in G-Audio 1.3.
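The extraction step, in a nutshell (simplified here to a plain float array - the real code stores the result in GATData containers):

    using UnityEngine;

    public static class ClipExtraction {

      // Pulls the decoded samples out of an AudioClip so they can be mixed
      // directly. Works for clips set to Decompress On Load.
      public static float[] ExtractSamples (AudioClip clip) {
        float[] samples = new float[clip.samples * clip.channels];
        clip.GetData(samples, 0);
        return samples;
      }
    }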

G-Audio is fully compatible with Unity audio, so mixing and matching isn’t a problem: positional audio with Unity’s AudioClip / AudioSource tandem, and 2D audio with G-Audio.

If you’d like to see some of my code, I’ve just released a free iOS Dirac Plugin. True pitch shifting of any AudioSource, including microphone and AudioListener. Link in my signature.

Cheers,

Gregzo

Thanks for the info, looking forward to the 1.3 release.