G-Audio Tools (GAT) - Releasing soon, suggestions welcome

New clean support thread here:
http://forum.unity3d.com/threads/223729-SUBMITTED-G-Audio-2D-Audio-Framework

https://vimeo.com/83722546

Webplayer: a little tour of the 6 example scenes provided with G-Audio. All scenes feature abundantly commented code. Built with Unity Free.

What is GAT?

G-Audio Tools is a 2D audio framework that enables much lower level control over audio playback than Unity’s audio API.

What does it do that Unity doesn’t?

-Real-time filters for Unity Free: low-pass and high-pass, low and high shelf, peak and notch filters, plus distortion and an LFO, are already implemented. Filters can be controlled in real time, per playing sample or per track, or applied to audio data for cached pre-processing of samples.

-Sample pre-processing: fade-in, fade-out, reverse, normalize or pitch shift whole samples or chunks, accurate to a single sample ( see the short sketch after this list ).

-Stop playback of a sample without having to adjust its volume yourself: a smooth stop in less than a frame.

-Full panning control: need a sound to travel from one speaker to another in a 7.1 system? GAT has you covered.

-Next to zero garbage collection: creating and destroying AudioClips on the fly can create heavy GC spikes ( and framerate drops ). GAT does not use AudioClips and pre-allocates memory so that garbage collection is kept to a minimum.

-Automatic mixing: all played sounds are mixed on one single AudioSource. Just tell a sample to play, it will never cut playback of another.

-Route samples through tracks ( inspector friendly, apply an effect to more than one sample at once ), or play them directly.

-Pre-configured FFT for spectrum analysis
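
To give a rough idea of what sample pre-processing means here, a tiny sketch of normalizing and fading out a raw float buffer. This is plain C# illustrating the concept only, not GAT's actual API ( class and method names are made up for illustration ):

```csharp
// Minimal sketch of offline sample processing on a raw float buffer.
// Plain C# illustrating the concept only - not GAT's actual API.
public static class SampleProcessing
{
    // Scale the whole buffer so its loudest sample hits exactly +/-1.
    public static void Normalize( float[] samples )
    {
        float peak = 0f;
        for( int i = 0; i < samples.Length; i++ )
        {
            float abs = samples[ i ] < 0f ? -samples[ i ] : samples[ i ];
            if( abs > peak )
                peak = abs;
        }

        if( peak <= 0f )
            return;

        float gain = 1f / peak;
        for( int i = 0; i < samples.Length; i++ )
            samples[ i ] *= gain;
    }

    // Linear fade-out over the last fadeLength samples of the buffer.
    public static void FadeOut( float[] samples, int fadeLength )
    {
        if( fadeLength > samples.Length )
            fadeLength = samples.Length;

        int start = samples.Length - fadeLength;
        for( int i = 0; i < fadeLength; i++ )
            samples[ start + i ] *= 1f - ( float )i / fadeLength;
    }
}
```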

Who is it for?

-Intermediate to advanced programmers who need more control over audio playback

-Anyone who’s making an iOS app where audio is at the forefront

-Anyone who’s ever tried to make a sampler or a sequencer in Unity

When is it coming out?

-v1.0 is waiting for review by the asset store team.

GAT is fully compatible with Unity’s Audio API: you may use it along with standard Unity audio playback.

Sample processing sounds interesting for use in tracker music to keep size down. Is that possible?

It sounds like a really useful tool. I’ve had a lot of issues with audio in Unity, mainly because my project requires me to load audio files from memory, which is something Unity has little to no support for. My current setup uses NVorbis and feeds in through a set of AudioSource filters, but it’s very unreliable and I’m definitely looking for alternatives. I imagine this isn’t the purpose that GAT was designed for, but I’m interested in whether it would work for this situation.

So I’ll ask: Which platforms does GAT run on? Additionally, can you play a clip (like an .ogg or similar) from memory or disk via a memory/file stream or byte array?

Can it work with any 7.1 or 5.1 devices?

Hi Woodlauncher,

Unity already supports tracker modules - http://docs.unity3d.com/Documentation/Manual/TrackerModules.html. Do you mean to ask if it would be possible to add sample processing to playback of .mod files? If that is your question, I really don’t know, I’d have to look into it.

Cheers,

Gregzo

Yes it can, plus you can easily control how much of a mono sample is fed to any channel.

Hi Doddler,

At launch, GAT will only support uncompressed PCM samples. Its purpose is real-time sampling, sequencing etc…, so compressed audio would mean too much overhead for these use cases. It does have classes to load and unload PCM samples from Resources or from disk, and manages a configurable memory buffer so that all allocations are super quick and the garbage collector never kicks in.

Platforms: all, as GAT is fully written in .NET.
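
For reference, loading an uncompressed sample from disk into floats can look roughly like this in plain C#. It assumes a canonical 44-byte header, 16-bit PCM .wav and skips all error handling; it illustrates the idea, not GAT's loader:

```csharp
using System.IO;

// Rough illustration of loading a 16 bit PCM .wav from disk into floats.
// Assumes a canonical 44 byte header and no extra chunks - not GAT's code.
public static class PcmLoader
{
    public static float[] LoadWav16( string path )
    {
        byte[] bytes = File.ReadAllBytes( path );

        const int headerSize = 44; // canonical RIFF / fmt / data layout
        int sampleCount = ( bytes.Length - headerSize ) / 2;
        float[] samples = new float[ sampleCount ];

        for( int i = 0; i < sampleCount; i++ )
        {
            short s = System.BitConverter.ToInt16( bytes, headerSize + i * 2 );
            samples[ i ] = s / 32768f; // map to the -1..1 range
        }

        return samples;
    }
}
```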

Sounds cool! We would like to know - will this compete with our product Master Audio? Or is this something where people might use both at the same time? So far it seems the latter. We don’t really add features to Unity audio that aren’t already there.

Hi Jerotas,

In many ways, GAT sits at the opposite end of the spectrum compared to Master Audio. It is lower level, doesn’t feature any event-like tools, and is aimed at music making in Unity. I wouldn’t use it in a traditional game audio setting, but couldn’t do without it for all my generative music projects.

So no, not a competitor, and yes, both could be used together. GAT does all its mixing on one single AudioSource via OnAudioFilterRead, and provides the client with reading callbacks allowing real-time processing or analysis. Samples can be pre-processed on the main thread and cached, or manipulated directly on the audio thread. It is about more control, not more ease of use!
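
For anyone wondering what mixing everything on one AudioSource via OnAudioFilterRead looks like in practice, here is a bare-bones sketch of the pattern: mono samples summed into the output buffer on the audio thread. It's only the general idea under simplifying assumptions ( no pooling, panning or filters ), not GAT's implementation:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Bare-bones illustration of mixing several mono samples on one AudioSource
// via OnAudioFilterRead. Not GAT's implementation - just the pattern.
[RequireComponent( typeof( AudioSource ) )]
public class TinyMixer : MonoBehaviour
{
    class Voice
    {
        public float[] samples; // mono sample data, -1..1
        public int position;    // next sample to read
        public float gain = 1f;
    }

    readonly List< Voice > _voices = new List< Voice >();
    readonly object _lock = new object();

    // Called from the main thread to start a sample.
    public void Play( float[] samples, float gain )
    {
        lock( _lock )
            _voices.Add( new Voice { samples = samples, gain = gain } );
    }

    // Called by Unity on the audio thread: add our voices to the buffer.
    void OnAudioFilterRead( float[] data, int channels )
    {
        lock( _lock )
        {
            for( int v = _voices.Count - 1; v >= 0; v-- )
            {
                Voice voice = _voices[ v ];

                for( int i = 0; i < data.Length; i += channels )
                {
                    if( voice.position >= voice.samples.Length )
                        break;

                    float s = voice.samples[ voice.position++ ] * voice.gain;
                    for( int c = 0; c < channels; c++ )
                        data[ i + c ] += s; // additive: never cuts another voice
                }

                if( voice.position >= voice.samples.Length )
                    _voices.RemoveAt( v );
            }
        }
    }
}
```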

VERY awesome! Definitely on my watch-out list.

Yes, that is what I meant. I haven’t looked into it very deeply, so I don’t know if it’s maybe already supported in one of those formats, but if you can pitch shift samples, that would really help keep file sizes down.

Unless there is some big flaw with tracker formats/tracker music in general that I’m missing, I don’t see why anyone wouldn’t use tracker music for games.

From the Unity docs ( link I provided earlier ): "Tracker module files differ from mainstream PCM formats (.aif, .wav, .mp3, and .ogg) in that they can be very small without a corresponding loss of sound quality. A single sound sample can be modified in pitch and volume (and can have other effects applied)."

So it looks like you’re covered already.

GAT is more meant for real time stuff: instruments, generative music, etc…

This is grand as a concept. Certainly a pal of mine would make use of this, and I see no reason why music generation can’t be game-based. One of my favourite asset buys was for audio analysis, driving values based on frequency and so on. Would this be suitable, or is it chicken and egg anyway, since you’re driving the creation yourself? I haven’t seen many updates to that old asset, which is sad, and I’d love something aimed towards synthesis (not so much a definite buy for me personally, but my friend would go mad for it). So: is there any aspect of analysing the spectrum for driving values? I have an example, which I won’t post unless asked, that reflects it; my pal, on seeing it, said 'wouldn’t that be good in a slightly different situation, mixing synthesis too'. We made plans, but had bugger all idea how to synthesise the sounds as it stood. So yeah, I don’t know if it’s in the remit of the product or even relevant, but I just wondered whether it would be considered as a future feature, or whether the user would be expected to supply such functionality.

Hi lazygunn,

I’ve already dabbled with spectrum analysis ( FFT ), and have a pretty fast C# implementation already running. Any sample that you play, you can ask for a copy of the data before it plays and draw it, FFT it, or whatever suits your needs.

FFT is quite resource intensive, though. Better to do it only on the final mix, which certainly is doable.

I’ll be checking it out for sure when it’s out. Thanks for the info!

Ahh, you know, I don’t think it was even as heavy as that, but that’s interesting, assuming FFT is the same FFT driving my 3D ocean waves in concept (I googled just then, haha, fast Fourier transform it is). In that regard (given I’ve already seen GPU processing becoming big in audio analysis), does it make sense, bottlenecks permitting, to have this on the GPU? I’m only just learning all this stuff after a lifetime of keeping my ‘art’ away from heavy programming concepts, so forgive my ignorance, but I am curious, especially as I’m using and learning about compute in different fields.

Moving FFT calculations to the GPU is not really doable without severe headaches and compatibility issues in Unity…

If what you need is frequency band values in real time to have graphics react, a simple FFT of the final mix is plenty, and is perfectly acceptable performance-wise even on mobiles. Bear in mind that as the audio buffer is, by default, 1024 samples long, the FFT will not give you super high frequency resolution: at a 44.1 kHz output rate, 1024 samples works out to bins roughly 43 Hz wide. That’s enough for most use cases, though…
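
Independent of GAT, the stock Unity way to grab spectrum data of the final mix and drive a value from it looks roughly like this ( AudioListener.GetSpectrumData is standard Unity; the band choice and smoothing below are arbitrary ):

```csharp
using UnityEngine;

// Reads spectrum data of the final mix with Unity's built-in FFT and
// derives a rough "bass level" to drive visuals. Band choice is arbitrary.
public class SpectrumDriver : MonoBehaviour
{
    const int BinCount = 1024; // must be a power of two, 64 to 8192
    readonly float[] _spectrum = new float[ BinCount ];

    public float BassLevel { get; private set; }

    void Update()
    {
        AudioListener.GetSpectrumData( _spectrum, 0, FFTWindow.BlackmanHarris );

        // Sum the lowest few bins as a crude low-frequency energy measure.
        float sum = 0f;
        for( int i = 0; i < 8; i++ )
            sum += _spectrum[ i ];

        // Simple smoothing so driven visuals don't jitter.
        BassLevel = Mathf.Lerp( BassLevel, sum, 0.3f );
    }
}
```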

Cheers,

Gregzo

Righteo! Was just curious. The main compatibility thing at the moment, I think, is that it’s a no-go with anything except DirectX (and definite headaches getting anything back to the CPU), but I find the topic interesting anyway. Next time my pal pops up I’ll suggest he look at this thread.

I’m currently making a vid to showcase the basics of the API. Should be available tomorrow at the latest…

Quick update:

Working like a mad lemur today, implementing a higher level API with nice and comfy Unity components for users who don’t want to join me in the rustling of leaves among the intricate foliage where I dwell.

Good news, it’s really useful!

-Sounds can now be routed through tracks, which have gain, pan and effects applied before mixing.
-As many tracks as you can muster without crashing your machines!
-Sounds can still be played on their own, with their own filters and pan control. You can also have a sound with one filter routed through a track with another.
-Found a cute, simple little lemury distortion on the web and implemented it. Nothing fancy, but super fast ( a generic sketch of that kind of waveshaper follows below ).
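
For the curious, a generic example of that kind of cheap waveshaping distortion: a common cubic soft clip, not necessarily the exact curve GAT uses:

```csharp
// One common, very cheap soft-clip waveshaper: y = 1.5x - 0.5x^3 for |x| <= 1,
// hard-clipped beyond that. A general example, not necessarily GAT's curve.
public static class SoftClip
{
    public static void Process( float[] buffer, float drive )
    {
        for( int i = 0; i < buffer.Length; i++ )
        {
            float x = buffer[ i ] * drive;

            if( x > 1f )       x = 1f;
            else if( x < -1f ) x = -1f;

            buffer[ i ] = 1.5f * x - 0.5f * x * x * x;
        }
    }
}
```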

Comments welcome,

G

P.S.: vid will come when I have the higher level API ready, probably tomorrow. Made one during the night, only to realise I really needed to work on some higher level controls…