Hello all!
I’m currently looking for a way to use an audio file to drive something other than bones.
From what I have seen around, there are plenty of examples of lip sync with audio.GetSpectrumData (both for people using a mic and for those using a sound file). But since I haven’t really worked with audio files in Unity (other than playing them in simple ways), and I’m not the brightest when it comes to sound, I’m still lost.
This is what I’m looking to achieve :
I’m currently making some props that are textured with Substance materials. I would like the exposed values (floats) of those Substances to be adjusted based on the sounds being played. So my dilemma is understanding how to extract the spectrum data so that I end up with, for example, float values that go between 0 and 100 (it could be any range, really, but 0-100 is easy to manage in many ways).
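To be clear about the kind of thing I’m imagining, here is a rough, untested sketch (the class name, the gain multiplier, and the array size are just placeholders I made up, not something I know to be correct):

```csharp
using UnityEngine;

// Rough, untested sketch: average the spectrum magnitudes each frame
// and scale the result into a 0-100 range, which could then be fed
// into an exposed Substance float.
public class SpectrumToFloat : MonoBehaviour
{
    public AudioSource source;             // plays the audio file
    [Range(0f, 100f)] public float output; // value I'd feed to the Substance

    // Buffer for GetSpectrumData; its length must be a power of two.
    float[] spectrum = new float[256];

    void Update()
    {
        // Fill the buffer with the current frequency spectrum.
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Average all the bins into one number.
        float sum = 0f;
        for (int i = 0; i < spectrum.Length; i++)
            sum += spectrum[i];
        float average = sum / spectrum.Length;

        // The raw values are tiny, so some gain is needed; the 10000f
        // here is a guess that would have to be tuned by ear/eye.
        output = Mathf.Clamp(average * 10000f, 0f, 100f);
    }
}
```

Whether averaging the whole spectrum is the right approach (versus, say, only looking at the low-frequency bins) is exactly the kind of thing I’m hoping someone can explain.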
I know that since I’m not giving any code to start with, some will think I’m just trying to get free code, but I’m not. I’m trying to understand how to proceed, not just get results. That way I’ll be able to apply it to other things the way I want. But for now, I’m just looking at how to get some float data out of an audio file.
Also…
Out of curiosity, how many floats (or layers) can we get out of audio.GetSpectrumData?
(I’m totally clueless about what that kind of data looks like.)
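From the little I’ve gathered (please correct me if I’ve misread the docs), the number of floats is whatever size of array you pass in, as long as it’s a power of two, with the docs mentioning a range of 64 up to 8192:

```csharp
// My (possibly wrong) understanding: the array length picks the
// number of frequency bands you get back, from coarse to fine.
float[] coarse = new float[64];   // 64 bands
float[] fine = new float[8192];   // 8192 bands
audioSource.GetSpectrumData(fine, 0, FFTWindow.Rectangular);
```

So is it really just “one float per frequency band,” with lower indices being lower frequencies?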
Thanks to anyone who gives me a chance to understand this better. (Even better if someone can point me to a tutorial that clearly explains how to proceed and what each step in the process does.) I’m pretty sure this could be interesting for many people who might not have thought about it.