Hey folks 
If you’re new to the thread, please read this first before watching the video above. Thanks 
It occurs to me that I haven’t done my best job of explaining what resonance is, or what it does. I will attempt to rectify that now. First, a few points:
- If you are happy with your current process of finding decent, loopable sections of an audio asset (in some cases as many as 20-50 loops/cuts per finished asset), finding an appropriate editor, and then cutting up, implementing and maintaining those assets, just to end up with something that has to be pitch shifted and faded in/out to approximate the frequencies between them, then read no further.
- If you are happy with the current triggering semantics, such as they are, and don’t have a case where it’d be nice to have the audio itself fire an event to let you know you’ve hit a marker or region in the audio so you can perform some other process, then read no further.
- I liken the audio part of a project to a bass player in a band. You really don’t hear a great bass player, you just hear the great bass in the song, but a bad bass player? You hear him immediately. Audio, sound effects and music in games are often an afterthought, in my humble opinion. Folks focus primarily on the visual eye candy or the physics, then pitch shift their audio assets so that the game looks good and feels good, but sounds like a bad 1980s Mario game. But hey, if that’s your thing, then read no further.
If you’re at this point and still reading, then the assumption is that I’ve piqued your curiosity a bit. Good 
Now for some technical stuff. If you don’t care about this part, then you might wanna skip this section 
#region Tech Stuff
From here on out, I will explain things from the point of view of creating a racing game with believable/realistic engine sounds, and my odyssey of trying to solve that. Believe it or not, this is actually extremely difficult to achieve, and in my opinion only a handful of games/developers actually pull it off in a believable manner. Even then, their solutions are mostly in-house, or cost major dollars to license, if they license them at all. CodeMasters and their F1 series comes to mind. Another might be iRacing, etc. There are others, but I won’t list them all. These are the current solutions that I’ve tried:
- The basic “find 3-4 loops” method. This was my first approach, and while it works, man does it sound nasty to me. When I stomp on the accelerator, as it were, or just accelerate in general, I want to feel/hear the power while I’m doing it, and that definitely wasn’t the case.
- FMOD’s engine designer. I initially tried to use it, and it was cool, but I found it extremely fiddly, time consuming and ultimately frustrating trying to find decent loopable sections in the source asset, at the frequencies I wanted, for the durations that I wanted, without them sounding like they were cut. And believe it or not, there are some audio assets where this is next to, if not actually, impossible. Even then, once I imported them into the designer I had to spend yet more time adding them, fiddling with lining them up, fade ins/outs, etc. I must’ve spent several hours/days tweaking and trying to get it all sounding right in the editor. The funny thing is, up until that point I’d never really thought about how to get all of that work into Unity and fully realized. I found a few small, non-detailed tutorials, and after looking at what the requirements would be, and the licensing: http://www.fmod.org/sales/, I ultimately gave up. I like to consider myself fairly intelligent, but man, don’t developers try to actually use the stuff they intend other folks to use?
- Rev by CrankCase Audio. Cool little demo, and it says that it’s for Unity on their site, but I got no responses to my emails asking for an actual Unity demo or trial or anything else. Add to that the fact that, as of this writing, the licensing is daunting (Pricing | Audiokinetic). There was the licensing to consider, plus additional packages to install, namely Wwise from Audiokinetic, all with limited documentation and tutorials on how to do so in a relatively easy manner. Maybe it’s been updated by now, I’m not sure, but even if so, the licensing makes it a complete non-starter for me.
- AudioMotors by AudioGaming. Pretty cool, and very similar to Rev, but again, nothing on how to actually get anything into Unity, or even whether they support it. Evidently AudioGaming and FMOD are somewhat partnered, but I didn’t see AudioMotors offered for Unity at all on FMOD’s site. The demo is cool, but extremely fiddly, especially when trying to set the ramp. Factor in the price/licensing, and it was another non-starter for me.
And that’s it. That was all I could find after literally weeks of searching. Now, all that being said, I think each offering has its good points and its bad points. I’m not naming them to trash them in any way; they just didn’t live up to what I wanted/needed in my personal experience.
So. What to do now? I just couldn’t let it go. There had to be a way to:
- Play an audio asset, in this case a car engine, without losing its fidelity. Every engine has its own unique harmonics, and I don’t want to lose those by having to pitch shift loops and, in my opinion, ruin it.
- Not spend umpteen hours prepping, cutting and searching to find acceptable loops, only to then have to manage those individual assets not only during development, but through deployment and later updates as well, on top of the many known and unknown limitations of playing them back in Unity.
- Somehow synchronize an ‘accelerator pedal’ or physics engine to the audio. Especially when the pedal is ‘flat’, i.e. at a sustained speed: how do you loop that at any place in the audio without compromising the first few points?
- Be fast and accurate.
- Not have to lug around a bunch of monolithic audio assets. Is there a way to easily and dynamically generate or play just certain sections of them? And if so, it’d be really cool if I could somehow get events from the audio itself to use as triggers for other things.
- K.I.S.S. (Keep It Simple, Stupid), or at least as simple as possible. This is also known as developer laziness, of which I am fully guilty. I’ve spent weeks writing an application to automate a process that only takes 5 minutes to do by hand, because in my mind, “I might need to do it again at some point in the future”. I think all developers have that mindset to a certain extent, as well as an abhorrence of creating assets to begin with. Any kind of asset. If that weren’t true, there wouldn’t be a name for it. It’s called “developer art”. LOL. Be honest. We tend to think in terms of placeholders instead, right?
Still with me? Because if you’re anything like me, a technophile, even a little bit, then I’m about to blow your mind. I’m going to reveal what I think is one of the best kept secrets in gaming, let alone audio. And believe me when I say, I know I’m not alone with my list above. I’ve seen all of the posts on this very forum, let alone other places, asking how to do each of these things.
Ready? Asynchronous Granular Synthesis. Yeah, I know. It’s a mouthful. And I’m sure at this point some folks’ eyes just glazed over, but please, bear with me. It’s about to get good 
What is it? Well, basically it’s a way of defining a specified FFT (Fast Fourier Transform) ‘window’ or section in the audio and breaking it into a group of minutely sized individual slices called ‘grains’. It then layers those grains based on parameters such as density, grain length, etc. The following links are some of the many, MANY whitepapers I’ve read over the last year, and they explain it with much more detail and clarity than I could offer:
http://www.media.aau.dk/~sts/ad/granular.html
http://www.sfu.ca/~truax/gran.html
http://www.music.mcgill.ca/~gary/307/week4/granular.html
http://www.cs.au.dk/~dsound/DigitalAudio.dir/Papers/BencinaAudioAnecdotes310801.pdf
http://www.camil.music.illinois.edu/Classes/404A2/2/granular.html
Cool. Did you get a better understanding of what the tech actually is? The last link explains it best in my opinion because it’s a bit more graphical 
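For the code-minded, here’s roughly what that looks like in C# (Unity’s language). To be clear, this is just my own single-threaded illustration of granular synthesis in general; every name and parameter here is made up for the example and has nothing to do with resonance.RT’s actual internals:

```csharp
using System;

// A minimal sketch of granular synthesis: layer tiny, windowed slices ("grains")
// of a source recording around a read position that the *caller* controls.
public static class GrainSketch
{
    static readonly Random rng = new Random();

    // Fills 'output' by summing short Hann-windowed grains taken from 'source'
    // around 'readPos' (in samples). Because the caller owns 'readPos', playback
    // time is decoupled from the source's own timeline: hold it still to sustain,
    // move it slowly to stretch, move it quickly to scrub.
    public static void Granulate(float[] source, float[] output,
                                 double readPos, int grainLen, int hop, int jitter)
    {
        if (source == null || source.Length < grainLen) return;
        Array.Clear(output, 0, output.Length);

        // Start a new grain every 'hop' output samples; smaller hop = denser cloud.
        for (int grainStart = 0; grainStart < output.Length; grainStart += hop)
        {
            // Scatter each grain's source position a little (the "asynchronous" part).
            int srcStart = (int)readPos + rng.Next(-jitter, jitter + 1);
            srcStart = Math.Max(0, Math.Min(source.Length - grainLen, srcStart));

            for (int i = 0; i < grainLen && grainStart + i < output.Length; i++)
            {
                // A window on every grain removes the clicks you'd get from hard cuts.
                float w = 0.5f * (1f - (float)Math.Cos(2.0 * Math.PI * i / (grainLen - 1)));
                output[grainStart + i] += source[srcStart + i] * w;
            }
        }
    }
}
```

The thing to notice is that nothing forces the read position to advance at the recording’s natural rate. That’s exactly what makes time stretching independent of pitch, and what lets you ‘hold’ the audio anywhere without cutting a single loop.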
#endregion Tech Stuff
Did you also notice the mind blowing secret?
This technology has been around forever AND it meets ALL of the criteria on my list, ALL while the audio is actually playing. Now, I’ve been not only a developer for about 30 years, but a musician as well. I’ve played guitar for almost 35 of my 50 years. I’ve used Sonar as well as ProTools to record and engineer, and they always had the coolest patches and plugins, but I never made the connection. Even though they were running in real time, because I was recording, I never thought of them as ‘real time’ in the same sense that I do when talking about games. And then it hit me. No one was really taking advantage of this technology. Not really. And the ONE place that could benefit the most would be games. If you trust it to record with, then why wouldn’t you trust it to play the audio in your games?
The possibilities started to hit me. Separating time stretching from pitching. Dynamically changing the bpm of just a section of the playing audio without changing the rest. Not just looping, but dynamically defining loops while the audio is playing. On and on. Literally. For a few days, I simply let my mind gorge on the possibilities.
My absolute first thought after my epiphany? How the hell can folks not know about this? Am I really the only one that didn’t know? Why hasn’t someone built something actually usable in a game engine with it? And if they have, why the hell isn’t there an API/SDK to take advantage of it? Are my google skills seriously that bad?
The answer to one of those questions is a big, emphatic ‘NO’. The big boys have not only known about this tech, but remember those in-house solutions I mentioned above? Remember the big licensing prices? Some of them use it. I’m pretty sure the World of Tanks guys use it. Listen to the tanks next time you play. I’m fairly certain the CodeMasters guys use it. Listen to the engine sounds in F1 2013. How else are they able to cycle, and sustain, the corresponding engine audio at exactly the position of the throttle, even when I hold the accelerator at the same spot? I hear no discernible pitch shifting. You’re trying to tell me they prepped hundreds of clipped loops? Per car? But here’s the thing: after spending a year researching, writing code and testing, it doesn’t matter what they use. Shrug. Who knows, maybe I stumbled upon something unique. Again, shrug. All I know is that it works. And it works well. And not just for engine sounds. This tech works on all audio. Ambient, cyclical, you name it.
I’ve spent a considerable amount of time fine tuning, performance tuning and testing; writing code to get it into Unity; and writing code to get around limitations or problems IN Unity.
Let’s be clear.
resonance.RT isn’t a clip player. It isn’t a full-featured audio solution. It isn’t a mixing solution. It isn’t really a filter either. What it is, is a multi-threaded DSP audio component built specifically to BE a granulator. Not as an afterthought, or as some squeezed-in capability, but from the ground up. It performs its calculations in a multi-threaded ring buffer, and then when an audio source, be it FMOD, G-Audio, MasterAudio (great assets, by the way) or any other source or asset, requests a buffer, it gets back a fully granulated buffer for that position in the audio.
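To make the “requests a buffer, gets back a fully granulated buffer” part concrete, here’s a hedged sketch of that producer/consumer shape in Unity terms, reusing the Granulate() sketch from the tech section above. Every name here is hypothetical (this is not resonance.RT’s code), and a production version needs proper memory barriers and bookkeeping that I’m glossing over for brevity:

```csharp
using System.Threading;
using UnityEngine;

// Sketch only: a worker thread granulates ahead into a ring buffer, and Unity's
// audio thread drains it via OnAudioFilterRead.
[RequireComponent(typeof(AudioSource))]
public class GranulatorSketch : MonoBehaviour
{
    public float[] sourceSamples;         // mono recording, loaded elsewhere
    public volatile float readPosition;   // where in the source to granulate (samples)

    const int RingSize = 1 << 15;         // power of two so we can mask instead of modulo
    readonly float[] ring = new float[RingSize];
    long writeHead, readHead;             // producer/consumer indices (needs real sync in production)
    Thread worker;
    volatile bool running;

    void OnEnable()
    {
        running = true;
        worker = new Thread(Produce) { IsBackground = true };
        worker.Start();
    }

    void OnDisable()
    {
        running = false;
        worker.Join();
    }

    // Producer: keep the ring topped up with freshly granulated samples.
    void Produce()
    {
        var block = new float[1024];
        while (running)
        {
            if (writeHead - readHead < RingSize - block.Length)
            {
                GrainSketch.Granulate(sourceSamples, block, readPosition,
                                      grainLen: 2048, hop: 512, jitter: 256);
                for (int i = 0; i < block.Length; i++)
                    ring[(writeHead + i) & (RingSize - 1)] = block[i];
                writeHead += block.Length;
            }
            else
            {
                Thread.Sleep(1); // ring is full enough, back off briefly
            }
        }
    }

    // Consumer: when Unity's audio thread asks for a buffer, it gets granulated audio back.
    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int frame = 0; frame < data.Length / channels; frame++)
        {
            float s = readHead < writeHead ? ring[readHead & (RingSize - 1)] : 0f;
            readHead++;
            for (int c = 0; c < channels; c++)
                data[frame * channels + c] = s; // duplicate mono into every channel
        }
    }
}
```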
I built this with the intention of using it for ambient effects such as wind, ocean shore lapping sounds, explosions, or engines, be they car, race, motorcycle, tank, helicopter, propeller plane, jet engine, etc., ALL with the thought that I would want to be able to vary them, in code, at run time, in response to what happens in game, without all of the clipping, looping, prepping, managing, deploying or updating of potentially hundreds of monolithic files.
Try doing all of that with static audio assets.
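And just so “varied, in code, at run time” isn’t hand-waving: driving the hypothetical sketch above from a throttle would look something like this (again, my made-up names, not the actual resonance.RT API):

```csharp
using UnityEngine;

// Hypothetical usage: map the throttle onto a position in one long recorded engine sweep.
public class EngineAudioDriver : MonoBehaviour
{
    public GranulatorSketch granulator;    // the sketch component from above
    public float idleSample = 0f;          // sample position of idle in the recording
    public float redlineSample = 400000f;  // sample position of redline in the recording

    void Update()
    {
        // Hold the pedal steady and the engine sustains at that spot in the recording;
        // move it and the audio sweeps with you. No pre-cut loops, no pitch shifting.
        float throttle = Mathf.Clamp01(Input.GetAxis("Vertical"));
        granulator.readPosition = Mathf.Lerp(idleSample, redlineSample, throttle);
    }
}
```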
Ok, so now you know what it is you’re seeing in that video above. If you haven’t watched it, shrug, give it a shot. A few more minutes won’t kill you 
Again, my apologies for not explaining things well or being clear. It was my wife, this morning, who pointed that out and made me realize it 
Btw, kudos and thanks must go to gregzo. You’ve helped tremendously whether you know it or not. Your posts, comments and answers have helped me to not only fix, but avoid or work around tons of issues.