How did Keijiro do this? VFX + Audio waveform

So there is this Japanese guy named Keijiro who creates so many wonderful things in Unity. Particularly, the one I am interested in is his LaspVFX package.

He somehow takes real-time audio (from your mic) and translates it into a waveform, which in turn makes the VFX Graph lines take the shape of the actual audio wave. I don't know how to explain it better, so here is a video of it:

I am a newbie, so what he does looks like magic to me; to record this video I literally had to place the mic right next to the headphones. I would like to do the same thing, but with audio from the game itself. How do I take an audio file and make my VFX Graph follow its waveform, similarly to how Keijiro did it?

This person's GitHub portfolio is quite impressive!

He's made a custom wrapper around a low-latency C audio driver to efficiently fetch audio samples from any device. He then grabs an audio buffer (a NativeSlice&lt;float&gt;, i.e. a low-level buffer) and projects it onto a Mesh.
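For audio coming from the game itself (rather than a mic through Lasp), you don't need a native driver at all: Unity's built-in `AudioSource.GetOutputData` can hand you the currently playing waveform, and you can push those samples into a texture that a VFX Graph reads. Here is a minimal sketch of that idea; the exposed texture property name `WaveformTex` is an assumption and must match whatever you define in your own graph:

```csharp
// Sketch: feed the current audio output waveform into a VFX Graph
// as a single-row float texture, once per frame.
using UnityEngine;
using UnityEngine.VFX;

[RequireComponent(typeof(AudioSource), typeof(VisualEffect))]
public class WaveformToVfx : MonoBehaviour
{
    const int Resolution = 512;          // sample count; must be a power of two
    readonly float[] _samples = new float[Resolution];
    Texture2D _texture;
    AudioSource _source;
    VisualEffect _vfx;

    void Start()
    {
        _source = GetComponent<AudioSource>();
        _vfx = GetComponent<VisualEffect>();
        // One-row RFloat texture acting as a 1D waveform buffer.
        _texture = new Texture2D(Resolution, 1, TextureFormat.RFloat, false);
        // "WaveformTex" is a hypothetical name for an exposed Texture2D
        // property on the VFX Graph asset.
        _vfx.SetTexture("WaveformTex", _texture);
    }

    void Update()
    {
        // Most recent output samples from channel 0 of the playing clip.
        _source.GetOutputData(_samples, 0);
        _texture.SetPixelData(_samples, 0);
        _texture.Apply();
    }
}
```

Inside the graph, a Sample Texture2D node with (particle index / Resolution) as the U coordinate gives each particle one sample value, which you can then use to displace its position, much like the line shapes in Keijiro's demo.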

A lot of good work there!

@SeventhString
Isn't he working for you guys? He's been around for a long time, and he lists Unity Japan as his workplace on GitHub.

Haha, you're right! I didn't pick that up while going through his code. I'm glad to be on the same ship as him! I just joined the company this year and haven't had the chance to meet him yet. I'll say hello for you guys if that ever happens.
