So I'm wondering what the best way to get an authentic footstep sound is in a video game. My current system has two audio sources attached to each foot (or more if there are more feet), and I use a raycast to check whether the foot is close enough to the ground, then call source.PlayOneShot().
I have a manager singleton that returns an array of sounds based on a collider tag, so the sound can correspond to different surfaces (dirt, wood, sand).
However, I see some problems with this. One is that when you step on the ground, the sound keeps moving with the player, because it is parented to the foot. But in reality a footstep sound does not travel with you; it stays and originates from the point of contact. So I'm thinking it would make more sense to have a pool of audio sources, and when a foot comes in contact with the ground, position the next available audio source at the contact point (where the raycast hits) and call Play(), NOT PlayOneShot(). This way the sound stays where the foot actually hit.
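A minimal sketch of that idea, assuming a simple round-robin pool (the class and field names here are made up for illustration, not Unity API):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Round-robin pool of AudioSources positioned at the raycast hit point,
// so the sound stays where the foot landed instead of following the player.
public class FootstepAudioPool : MonoBehaviour
{
    [SerializeField] int poolSize = 8;
    readonly List<AudioSource> sources = new List<AudioSource>();
    int next;

    void Awake()
    {
        for (int i = 0; i < poolSize; i++)
        {
            var go = new GameObject("FootstepSource_" + i);
            go.transform.SetParent(transform); // parented to the pool root, NOT the foot
            var src = go.AddComponent<AudioSource>();
            src.playOnAwake = false;
            src.spatialBlend = 1f; // fully 3D, so the source position matters
            sources.Add(src);
        }
    }

    public void PlayAt(Vector3 contactPoint, AudioClip clip)
    {
        var src = sources[next];
        next = (next + 1) % sources.Count;
        src.transform.position = contactPoint; // leave the sound at the hit point
        src.clip = clip;
        src.Play(); // Play(), not PlayOneShot(), so this source owns the clip
    }
}
```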
But maybe there is a better way. I know I could use animation frames to know when feet are hitting the ground, but I have around 20 animations for characters and don't think that would even be worth the time…
Just wondering what other people do to tackle this problem!
Yes, this is my approach. Well, pool of “footprints,” which combines audio source AND decal AND code to align decal with the surface normal AND code to reparent to the ground if the ground is a moving object, …
But you should be using animation events to decide when you have a footstep, before you bother raycasting. Instead of the physics overhead of checking for ground twice every frame, you get a nice callback for FootL or FootR; you then decide if the foot bone is close enough to the ground and what kind of ground you hit, to decide what kind of footprint.
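As a sketch of that flow (the thresholds and field names here are assumptions, not anything official): the event fires first, and only then do you raycast once to confirm the foot is near the ground and learn what you hit.

```csharp
using UnityEngine;

// Animation events named "FootL" / "FootR" call these methods by name.
// One raycast per step replaces per-frame ground checks.
public class FootstepEvents : MonoBehaviour
{
    [SerializeField] Transform leftFoot, rightFoot; // foot bones
    [SerializeField] float maxFootHeight = 0.15f;   // "close enough" threshold (tune this)
    [SerializeField] LayerMask groundMask;

    public void FootL() { TryPlaceFootprint(leftFoot); }  // called by the FootL event
    public void FootR() { TryPlaceFootprint(rightFoot); } // called by the FootR event

    void TryPlaceFootprint(Transform foot)
    {
        // One raycast gives distance, hit point, surface normal, and collider.
        if (Physics.Raycast(foot.position + Vector3.up * 0.1f, Vector3.down,
                            out RaycastHit hit, maxFootHeight + 0.1f, groundMask))
        {
            // hit.point, hit.normal, hit.collider.tag / hit.collider.sharedMaterial
            // -> hand these to your footprint pool here.
        }
    }
}
```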
Thanks for the reply, I appreciate your experience and knowledge.
Tell me if I understand correctly
So you have a pool of footprints, and you use animation callbacks to tell when the foot could be on the ground, then check the distance to ensure it is (at least close enough), and then position your footprint prefab at the foot. Then shoot a ray to determine the ground by Physics Material/texture/tag/object name, then call Play(), instantiate a decal, etc.
Do you leave the footprint prefab at that position until it is needed again? Also, I'm not familiar with animation callbacks; do you have any tips for setting those up for footstep detection?
Checking the distance and getting the PhysicsMaterial is the same Raycast.
Instantiate a Footprint prefab, which has components that know everything about the contents and lifespan of a Footprint-- does it have a decal, does it have particle effects like dust or sparks, ask the system for the right sound clip(s) for the material(s) involved, how long should it live before it expires and returns to the pool, should it fade the decal according to an AnimationCurve, etc., etc., etc. You want a Footprint to be “fire and forget,” not requiring any extra management by the character that dropped it.
What the pooling system does with Footprints that are not active is up to the pooling system. If the Footprint is parented to a static part of the world (or has no parent), it’s fine that it stays there forever until the pooling system wants to recycle it. If the Footprint is parented to a moving object, then there’s a risk that it might be destroyed instead of recycled if that moving object is destroyed, so be sure to let the pooling system make new elements if its stockpile gets low.
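A "fire and forget" Footprint might look something like this sketch. The `IFootprintPool` interface and method names are assumptions for illustration; the point is that the footprint owns its own lifespan and returns itself to the pool, so the character never has to manage it.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical pool contract: anything that can take a Footprint back.
public interface IFootprintPool { void Return(Footprint fp); }

public class Footprint : MonoBehaviour
{
    [SerializeField] float lifespan = 10f; // seconds before returning to the pool
    IFootprintPool pool;

    public void Spawn(IFootprintPool owner, Vector3 point, Vector3 normal)
    {
        pool = owner;
        transform.position = point;
        // Align the decal with the surface normal the raycast reported.
        transform.rotation = Quaternion.FromToRotation(Vector3.up, normal);
        StartCoroutine(Expire());
    }

    IEnumerator Expire()
    {
        yield return new WaitForSeconds(lifespan);
        pool.Return(this); // fire and forget: the footprint recycles itself
    }
}
```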
So I have a FootprintPlacer component, and currently it handles everything. It has a footprint object property, which takes a prefab, but it creates its own pool of audio sources and particles and decals, based on the prefab.
The alternative would be to create a pool of footprints, which seems cleaner, but with that system it would be difficult to change little things like just the particle system…
My current system is very versatile, and only creates the exact number of audio sources/decals/particles needed.
So I guess I'm wondering: is it better to have a pool of footprints that handle certain things themselves?
Or is it better to just have a manager, that creates its own pools of decals, audio sources, particles?
Only you can decide where versatility vs design efficiency vs runtime efficiency wins out.
I personally have a pooling system which pools multiple prefab types, but most other pooling setups I have seen will need a separate pool instance for each prefab it can create, so you need to manage your pools which manage your instances. It all comes down to preferences and “moving deck chairs” after a while, but make your decision with profiler data in hand, and think about the time you’re spending making the perfect system vs the game that’s selling.
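One way to pool multiple prefab types behind a single manager is to key a dictionary of free lists by prefab, as in this sketch (names are made up; it's one of many reasonable shapes for this):

```csharp
using System.Collections.Generic;
using UnityEngine;

// One pool that serves many prefab types: a stack of free instances per prefab.
public class MultiPool : MonoBehaviour
{
    readonly Dictionary<GameObject, Stack<GameObject>> free =
        new Dictionary<GameObject, Stack<GameObject>>();
    readonly Dictionary<GameObject, GameObject> sourceOf =
        new Dictionary<GameObject, GameObject>();

    public GameObject Get(GameObject prefab)
    {
        if (!free.TryGetValue(prefab, out var stack))
            free[prefab] = stack = new Stack<GameObject>();

        var go = stack.Count > 0 ? stack.Pop() : Instantiate(prefab);
        if (go == null) go = Instantiate(prefab); // a recycled instance was destroyed
        sourceOf[go] = prefab;                    // remember which stack it returns to
        go.SetActive(true);
        return go;
    }

    public void Return(GameObject go)
    {
        go.SetActive(false);
        free[sourceOf[go]].Push(go);
    }
}
```

The null check on `Get` covers the "parented to a moving object that got destroyed" case mentioned above: the pool just makes a new element when its stockpile comes up short.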
Hey, so I'm using the standard third person controller from Unity for prototyping, so I had to go through and duplicate every animation to make it not read-only. (Is that the only way?)
Also, do I have to set up the same events for every rig? Or can I use the same script on all of them and have the event call the same function? It seems like it's per game object when I go to set events.
For instance, for a walk animation, if multiple humanoid rigs use it, will the animation event work on all of them? Do they need to share the same animator controller, or just have the same script/function? I tried looking for this info and could not find it, and I don't want to set up events for every single one of these animations and do it the wrong way.
There are a LOT of animation packs available which provide individual .anim files, instead of .fbx files that treat the animation as a sub-asset. You can edit the events in a .anim, but not in a .fbx, because it's a bundle of more than one asset. It's kind of like how you have to extract a document from a zip archive in order to edit it.
Unless you get fancy with arguments, events are just a string. Any class you write and attach to your character can respond to them with functions whose names match the string event name. So you add a FootL event to your animation, and Unity will call all of the functions called FootL() on the object that is being animated (and warn you if no such function was found). That naming scheme is transferable to all rigs, since it is just looking for functions by name.
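Because the matching is purely by name, the same event works on any rig that has a component with that method. You can even add events from code with `AnimationClip.AddEvent` (the times below are made-up examples; runtime-added events are not saved into the asset):

```csharp
using UnityEngine;

// Adds FootL/FootR events to a clip at runtime and responds to them.
// Any script on the animated object with matching method names gets called.
public class FootstepEventInstaller : MonoBehaviour
{
    [SerializeField] AnimationClip walkClip;

    void Awake()
    {
        walkClip.AddEvent(new AnimationEvent { functionName = "FootL", time = 0.25f });
        walkClip.AddEvent(new AnimationEvent { functionName = "FootR", time = 0.75f });
    }

    public void FootL() { /* place left footprint */ }
    public void FootR() { /* place right footprint */ }
}
```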
If you are trying to trigger event calls in the Animator state machine, then yes, those are more specifically bound to a given behaviour and function in your animated object. I find those cumbersome to set up but you have a lot more control over what is getting called.
So how do you handle the fact that blended animations also fire event callbacks? Do you just filter them out by only using the animation with the most weight? How do you even check that?
Your blended foot animations should all be approximately the same length, and synchronized so they all start at the same point on the cycle, so the right foot falls in all of them at about the same time. Then you can just anti-spam the event in the event handler, refusing to make a second footprint within 2 frames or 0.01 seconds or so.
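That anti-spam guard can be as small as a timestamp check, as in this sketch (the interval is a tuning value, not a fixed rule):

```csharp
using UnityEngine;

// Blended clips can fire the same event more than once per step,
// so ignore repeats that arrive inside a small time window.
public class DebouncedFootstep : MonoBehaviour
{
    [SerializeField] float minInterval = 0.05f; // tune to your stride; ~2 frames works too
    float lastStepTime = float.NegativeInfinity;

    public void FootL()
    {
        if (Time.time - lastStepTime < minInterval) return; // duplicate from a blended clip
        lastStepTime = Time.time;
        // ...raycast and place the footprint here...
    }
}
```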