[WIP] resonance.SUITE

Hello everyone, I’ve been a huge fan of Unity for a while now and I’ve purchased many items, from SGT to model kits to shaders/packs etc., and now I believe it’s time for me to contribute :wink:

A small warning: this will more than likely be a somewhat large post from me, so for the brave, continue on! :wink:

Firstly, a brief history of the strange trip that led me here, and then I’ll explain what it is, what it’s not, and what I hope to accomplish with it.

I initially started working on a game (who isn’t?), but unfortunately, being hyper-critical about what I was working on (planetary terrain etc.) and the issues that became roadblocks, I quickly found myself neck-deep in minutiae and unable to focus on the project at hand. Either I wasn’t completely happy with the quality of something, or with its performance, and I became increasingly frustrated and unmotivated. So… I did what any good aspiring game developer does: I started (what I thought was going to be) a smaller, more focused project, something to hone my skills and learn more about Unity. It was to be an overhead, third-person networked racing game controlled with the mouse. I had all of my graphical assets in order, as well as physics, etc. What I quickly discovered, to my chagrin, was the appalling lack of cyclical sounds. More specifically, engine sounds. Now you may be wondering what the hell I’m talking about. Well, it’s more than just finding and licensing/downloading a car/race engine sound. There is cleanup and looping/cutting to be done, and then trying to figure out the best way to use it at run time. For example, when I ran the Unity car tutorial, it sounded like this:

YUK! To be honest, I found the pitch changing to be nasty sounding and more than a little unrealistic. So I researched alternatives, and to be honest, I looked for weeks and found little to nothing. FMOD had something for engines, but I found it so overbaked and over-engineered for what it was that the results I got frustrated me even further. I happened to stumble upon REV, which was cool, but even that was painful when trying to set up a decent ramp, and their demo gives little to no detail or help about what I was supposed to be doing in their app. I emailed them several times but never got a response, so, on to the next app. It was decent, but I think they wanted too much money. Through all of this I started noticing a somewhat common theme: engine sounds (or cyclically dynamic sounds) seem to be an afterthought.

With FMOD I had to spend many, MANY painful hours trying to find ‘sweet spots’ for loops, and even with some apps allowing me to select zero crossings, the results were almost always non-loopable, and THEN came the cutting, saving, managing, importing, tweaking, etc. Finally, in a fit of frustration, I screamed at the monitor: “There HAS to be a better way! I CAN’T be the only person on earth that’s needed something like this…” It was then, after reading so many white papers that I think I’m actually a little blind now, that I determined what (I thought) it should be:

#1) It HAS to be EASY. Being a solo developer, even if I’ve been doing this for over 30 years, doesn’t mean I want to, or am even capable of, doing everything myself. I take my hat off to those folks, truly. So, the basic premise is to take a wav/mp3 etc. and not have to touch it; to let the code/physics engine manipulate the sound from a SINGLE audio file. That way, I cut down a HUGE number of potential cycles while at the same time increasing the dynamic-ability (hehe, couldn’t resist), which leads me to,

#2) It HAS to sound GOOD (or at least plausible).

#3) It shouldn’t cost anyone their left arm, their firstborn child, or some arcane license that will follow them around for the rest of their life.

#4) It HAS to be PERFORMANT.

So, to recap:

What it is: A dynamic C++ run time that utilizes asynchronous grain technology at the DSP level of your sound card, without draining your system of resources. While this technology has been around for ages, few real SDKs exist in the wild, and if they do, get ready to have one less child, because that’s what it’ll cost you. Most, if not ALL, of the companies I found using this tech have their own locked-away proprietary code; think Codemasters and the F1 IP. Btw, did I mention resonance is COMPLETELY non-destructive?

What it’s not: While it can, and most certainly DOES, play audio files, it’s not simply another player. It has been specifically designed for cyclical sounds for which it is near impossible to create or obtain nice loops, while maintaining sound fidelity and easing the sound engineer’s nerves.

What I aspire for it to become: A cool, cheap, quality API that fills this gap. The funny thing is, during my testing I’ve discovered a few things: this technique would also be great for ‘sound atlases’ as well as other types of sounds.

In the video below I provide a prototype of what I’m talking about. ALL the wave files you’ll hear in the video were downloaded from the net. I have done NOTHING to them. Oh, and maybe I’ll make a few bucks as well, otherwise the ole lady won’t leave me alone about it :wink:

Anyway, if you’ve made it this far, please have a view, and be sure to post what you think if you have a moment or two. I’ve been working on this nonstop for the last six months, so your comments will help me determine what to do next. I, in turn, will post more goodies as things progress.

Hopefully I don’t sound grandiose and full of myself, but I will be offering the resonance.SUITE, comprised of the RT/API, FX, and Editor.

I will post more about the technical details as well as a breakdown of each component in a bit :wink:

-Marionette

As promised, some details.

Please bear in mind, this is only an alpha preview. Before going further, I wanted to get some feedback on whether or not the community is interested in something like this.

The red window in the video above is called the ‘grain window’, which, based on its size, defines the area in which grains interact. In the above case, I set the window to roughly 200 milliseconds. Applying further randomization parameters like jittering, randomized grain length, and randomized grain density gives a more natural feel. Everything that is manipulated in the video can be, and is, manipulated by code. What you are hearing and seeing is all real time. The editor is used to tweak settings and create user-defined regions, which are then saved as a markup file. In addition to being markers, those regions can also tell the run time to fire events back to the application or physics engine, such as when the grain window intersects a defined region. The fader on the bottom left, the ‘scrubber’, controls the speed at which playback occurs. (Note that there is no pitch degradation and no time-stretching artifacting, because resonance uses neither.)

When moving the grain window with the mouse, speed is arbitrary; however, I’ve limited the scrubber to +/- 2x playback speed. This might be tied to a physics engine’s accelerator, for example.

If you need to add playback events/user-defined regions, you can use the editor to do so, or manipulate everything dynamically in code.

A typical workflow would be to obtain/record an audio file, load it into the editor, set some parameters, create some user regions, and save it. No cutting. No searching for places in the audio file to loop. What you saw in the video is literally what you would do as part of the workflow. For example: in the first part of the video, I would make a user region called ‘startup’ or ‘ignition’ and place it over that part of the audio. I would then set different regions signifying different RPM values to be used for shifting/transmission changes. Using regions in .Net, for example, is as simple as engine.Regions["Ignition"].set() to set the grain window position to the start of that region. Other properties would let you loop a region, play it once, etc.

This technique can also be used on ambient types of sound like wind or water/ocean. Because resonance can dynamically randomize parameters, you might now only need a 5-second audio file instead of a 5-minute file. Further, using regions, you can create sound atlases comprised of multiple different sounds: less memory use, fewer performance impacts from loading/unloading clips during playback, and less potential file maintenance.

Platforms:
Currently Windows only; however, the run time is cross-platform. A Mac version of the editor is slated for further down the line.

Development Languages:
C++ and .Net. An Objective-C version of the API will be made available along with the Mac version of the editor.

Game Platforms:
Unity3D and Unreal Engine at this time. AFAIK, the free version of Unity can also use the run time, but I will need to do further tests to confirm.

FX:
All of the standards plus a few:
Chorus
High/low pass filters
DTMF tone generator
Phaser
Flanger
Pitch control +/- 12
Reverb
Delay
Distortion
Band pass filter
Metronome
Oscillator

Documentation:
In progress.

Tutorials/Demos:
ToDo.

I’m currently trying to time the beta release with Unity5 due to all of the cool new sound improvements, but we’ll see. It also depends heavily on whether or not I’ll have direct access to buffers/DSP through Unity5. (Hint hint, UT :wink: )

Licensing/Cost:
Too early in development, however I’m designing this with folks just like me in mind.

-Marionette

+1 Mac