Guitar Hero style game?

After browsing through Unity Answers and having a quick look on Google, there doesn’t seem to be much info on the subject (that I could see).

I wondered what sort of things I should look at if I wanted to make something like Guitar Hero or Flash Flash Revolution (a music rhythm game).

I guess the first thing would probably be taking the BPM and dividing it by 60 to get the beats per second, but I’m pretty stuck from there; I’ve never really touched on this side of things before.

I did notice there was a package in the Asset Store, but I’m skint :frowning:

If you want to create something based on an arbitrary music file (mp3, wav, ogg), you must analyse the sound spectrum with GetSpectrumData and check the volume of certain frequency ranges. Usually the rhythm is marked by the bass guitar and the bass drum - both in a lower frequency range, about 20 to 200 Hz. You can detect peaks in this frequency range to sync something with the rhythm, for instance. You can also monitor other frequency ranges to add other events, as in the game you’ve linked to.
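A rough illustration of that peak-detection idea (my own untested sketch - it relies on the GetSpectrum and BandVol functions from the script further down, and the 1.5 threshold and 0.1 smoothing factor are arbitrary): compare the current bass-band volume against a smoothed running average, and treat a sudden spike as a beat:

private var avgBass: float = 0;

function Update(){
    GetSpectrum(); // defined in the script below
    var bassVol: float = BandVol(20, 200); // bass guitar / bass drum range
    // fire a "beat" when the bass volume spikes well above its recent average
    if (avgBass > 0.01 && bassVol > 1.5 * avgBass){
        Debug.Log("Beat at " + audio.time);
    }
    avgBass = Mathf.Lerp(avgBass, bassVol, 0.1); // smoothed running average
}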

In some cases you must analyse the music with some anticipation, in order to create objects ahead of the player that will be in sync when the music reaches them (like in Guitar Hero or in the referenced game). To do that, you should play and analyse the song in an object located far away from the listener, and play the music in a near object with the required delay: if you need 0.5 seconds of anticipation, for instance, start the music in the far object and 0.5 seconds later start it in the near object.
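A minimal sketch of that delayed-playback trick (the field names are mine, and this isn’t tested): two AudioSources with the same clip, one placed far from the AudioListener so it’s analysed but not heard, and one near it for the player:

var analysisSource: AudioSource; // far from the listener: analysed, not heard
var audibleSource: AudioSource;  // near the listener: what the player hears
var anticipation: float = 0.5;   // seconds of look-ahead

function Start(){
    analysisSource.Play();
    // start the audible copy after the anticipation delay, so the analysis
    // always runs that far ahead of what the player hears
    yield WaitForSeconds(anticipation);
    audibleSource.Play();
}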

The function BandVol below calculates the volume of a given frequency range. You must call GetSpectrum to get a snapshot of the sound currently being played by the object’s AudioSource; then you can analyse several different frequency bands. This script must be attached to an object with an AudioSource, and you must drag three objects (cubes, spheres, whatever) to bass, mid and treb in the Inspector. When the song is playing, the Y coordinate of each object will be proportional to the volume of the frequency range it monitors. You can extend this script to several bands, until you find the ones you want.

private var freqData: float[];
private var nSamples: int = 1024;
private var fMax: float;
     
function GetSpectrum(){
    // get spectrum: freqData[n] = vol of frequency n * fMax / nSamples
    audio.GetSpectrumData(freqData, 0, FFTWindow.BlackmanHarris); 
}

function BandVol(fLow:float, fHigh:float): float {
    fLow = Mathf.Clamp(fLow, 20, fMax); // limit low...
    fHigh = Mathf.Clamp(fHigh, fLow, fMax); // and high frequencies
    var n1: int = Mathf.Floor(fLow * nSamples / fMax);
    var n2: int = Mathf.Min(Mathf.Floor(fHigh * nSamples / fMax), nSamples - 1); // don't exceed the last freqData index
    var sum: float = 0;
    // average the volumes of frequencies fLow to fHigh
    for (var i=n1; i<=n2; i++){
        // sum the relative amplitudes at each frequency multiplied by i (to
        // compensate the typical "equal energy per octave" distribution of most sounds)
        sum += freqData[i] * i;
    }
    return sum / (n2 - n1 + 1);
}

var bass: Transform;
var mid: Transform;
var treb: Transform;
var volume: float = 20;
private var yBass: float;
private var yMid: float;
private var yTreb: float;

function Start(){
    // GetSpectrumData initialization
    fMax = AudioSettings.outputSampleRate / 2;
    freqData = new float[nSamples];
    // example initialization
    yBass = bass.position.y;
    yMid = mid.position.y;
    yTreb = treb.position.y;
}

function Update(){
    GetSpectrum();
    bass.position.y = yBass + volume * BandVol(20, 200);
    mid.position.y = yMid + volume * BandVol(300, 1800);
    treb.position.y = yTreb + volume * BandVol(5000, 20000);
}

Don’t expect to get too much precision - like detecting exactly which note is playing, for instance. Songs generally have lots of instruments and voices sounding together, which makes precise pitch evaluation very hard, if not impossible.
NOTE: I didn’t test this whole script, so please let me know if you get any errors.
EDITED: Fixed some typos, and included a compensation for the typical “equal energy per octave” distribution of frequencies in most sounds - the harmonics are more widely spaced at higher frequencies, so without it the average value returned by BandVol would be reduced at higher frequencies.

I agree with @Brian 2, your question is too vague. If you’re asking “How do I take any piece of music and turn that into game input requirements?” then I would suggest that you probably don’t; instead, you should do some of the following:

  • Create some simple gameplay elements for the player to achieve - i.e. tap this now, tap this twice, tap and hold.
  • Make these elements generic so that they can be required anywhere.
  • Create an Editor or EditorWindow that allows you to:
      • Create some metadata for each track describing when elements should appear (see the sketch after this list)
      • Display a waveform representation of the track in question
      • Add some controls to insert these elements
  • Have some playback object read these elements and:
      • Present them to the player so they know what inputs to do when
      • Determine the success of each of the elements, plus any combos where multiple elements succeed within a given timeframe
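A sketch of what that per-track metadata and its playback might look like (NoteElement, SpawnNote and leadTime are my own illustrative names, not an established API):

class NoteElement extends System.Object {
    var time: float;     // seconds into the track when the input is due
    var lane: int;       // which button/fret/arrow it belongs to
    var kind: int;       // 0 = tap, 1 = double tap, 2 = hold...
    var duration: float; // only meaningful for holds
}

var chart: NoteElement[];  // authored in your editor, one array per track
var leadTime: float = 2.0; // spawn each element this many seconds early
private var next: int = 0;

function Update(){
    // spawn each element leadTime seconds before it is due, so it can
    // scroll towards the hit line and arrive exactly on time
    while (next < chart.Length && chart[next].time - audio.time < leadTime){
        SpawnNote(chart[next]); // hypothetical helper that creates the visual
        next++;
    }
}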

You should then be able to associate metadata with any track, and on playing that track the input elements would be displayed and the user could attempt to meet those criteria.

That is the game side of things, or at least a starting point for an approach.

If you wish to render something dynamic to music then you want some sort of audio/music visualisation - you should look at examples from programs like WinAmp for inspiration. Basically these all work by taking parameters of the waveform - amplitude, rate of change, and anything else you can extract - as inputs and rendering something visual from them.

There is a question on a ‘similar’ site covering this branch of the topic:

http://stackoverflow.com/questions/153712/creating-music-visualizer

From a Unity perspective this would mean rendering pixels, which might be tough. One approach is to create a plugin that modifies an area of RAM representing the background, which you can then dump into a texture. If you’re a Pro user you can probably access the rendering pipeline and do this with a texture you can write to and update efficiently. That is a whole other set of questions, of course…
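That said, even without a plugin you can get a basic spectrum visualisation in script with Texture2D.SetPixel and Apply - a rough, untested sketch (the 50.0 scale factor is arbitrary), for a script on an object with an AudioSource and a visible material:

private var tex: Texture2D;
private var spectrum: float[];

function Start(){
    spectrum = new float[256]; // GetSpectrumData needs a power-of-two size
    tex = new Texture2D(256, 64);
    renderer.material.mainTexture = tex; // draw onto this object's material
}

function Update(){
    audio.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
    for (var x = 0; x < tex.width; x++){
        // bar height proportional to the amplitude of this frequency bin
        var h: float = Mathf.Clamp(spectrum[x] * 50.0 * tex.height, 0, tex.height);
        for (var y = 0; y < tex.height; y++){
            tex.SetPixel(x, y, y < h ? Color.green : Color.black);
        }
    }
    tex.Apply(); // upload the modified pixels to the GPU
}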

Hopefully there are some ideas here. Clearly this is off-the-top-of-my-head thinking, but it should be a starting point. I suggest you make the question much clearer.

Thanks
Bovine

Hi, just coming by to give an idea of how Guitar Hero works: the song is split into separate tracks - guitar, bass, drums and the parts we don’t play (the backing track) - which are all played back synchronized, while MIDI files containing the notes are read alongside them. A great example is Frets on Fire, which works the same way as Guitar Hero 3.