Best practices for managing all the in-game sounds?

What is the best practice for managing all the in-game sounds, both SFX and music? From my research I've concluded that it seems best to have a GameObject that acts as a sound manager, with two separate audio sources for music and SFX. For the music, it would make the most sense in my project to keep a reference to the audio in the scene and have the audio manager look for it, but that's not what I'm stumped on. My questions are:

  • How does one handle all the sound effects for the player and enemies, and where do you put them?

  • Should all the player sounds be on the player GameObject? If you have different footstep sounds for different floor types (wood, metal, cement, etc.), would they also all go on the player GameObject?

What about enemies?

  • Should each enemy have all their sounds on their respective GameObjects, even if they're the same type? Or do you have a container holding all the enemy sounds that you call from? If so, what about enemy types that aren't in that specific scene? Would having their SFX in the scene be a waste of resources?

This is my first large-scale project, so I've never had to deal with this before, and I really want to know what the best practices are to keep things neat and tidy before I go any further into the audio department.

Thanks in advance!

3 Likes

There are several ways to do this, and generally speaking no way is particularly better than another. However, the way I have it structured, I have an audio manager that acts as a game-specific wrapper for MasterAudio. It contains all the IDs and registrations of sound effects that are triggered from an event system. That way, any gameplay system that needs sound just raises a SoundEvent with a specific ID. This serves as a level of abstraction: those systems don't have to know anything about audio, only about audio-related events. It also makes it much easier to find calls when searching by a specific ID, because that's what gets passed into the event call.
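In code, the pattern looks roughly like this. This is a minimal hypothetical sketch of the event-ID idea, not MasterAudio's actual API; `SoundEvents`, `AudioManager`, and the IDs are all made-up names:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Gameplay code only raises an event with an ID; only the manager
// knows which AudioClip that ID maps to. (Hypothetical sketch, not
// MasterAudio's API.)
public static class SoundEvents
{
    public static event System.Action<string> OnSound;

    public static void Raise(string id) => OnSound?.Invoke(id);
}

public class AudioManager : MonoBehaviour
{
    [System.Serializable]
    public struct Registration
    {
        public string id;     // e.g. "player.jump"
        public AudioClip clip;
    }

    [SerializeField] private Registration[] registrations;
    [SerializeField] private AudioSource sfxSource;

    private Dictionary<string, AudioClip> _clips;

    private void Awake()
    {
        _clips = new Dictionary<string, AudioClip>();
        foreach (var r in registrations) _clips[r.id] = r.clip;
        SoundEvents.OnSound += Play;
    }

    private void OnDestroy() => SoundEvents.OnSound -= Play;

    private void Play(string id)
    {
        if (_clips.TryGetValue(id, out var clip))
            sfxSource.PlayOneShot(clip);
    }
}
```

A gameplay system never touches audio directly; it just calls `SoundEvents.Raise("player.jump");`, and searching the codebase for that ID finds every call site.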

1 Like

@aer0ace how do you handle the position of the sounds in this system? Do you send the world position along with the audio events?

1 Like

There’s this function:
https://docs.unity3d.com/ScriptReference/AudioSource.PlayClipAtPoint.html
However, it does not allow you to move the sound once it has started playing.
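For reference, a minimal usage sketch (assuming this runs inside a MonoBehaviour and `impactClip` is assigned elsewhere):

```csharp
// Spawns a temporary one-shot AudioSource at the given world position
// and destroys it automatically when the clip finishes. There is no
// handle to move or stop the sound afterwards.
AudioSource.PlayClipAtPoint(impactClip, transform.position, 0.8f);
```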


You use whatever works for you.

That's definitely not the way I'd do it, though.

First, the player object can be in a separate location from the camera. That means if sounds use positioning, having every sound in the game bound to the player will be odd, as moving the camera will make it obvious that all the sounds are coming from the player.

You put an audio emitter on the objects that emit sounds, using the AudioSource class.

They shouldn't be on the player, because that will screw up 3D positioning. They should be on an object that emits sounds, at the position that emits them.

In general, you'll probably want the audio emitter at the position that plays the sound, because 3D positioning matters. Past that point, you can add multiple AudioSource components for different sounds, or you can use one AudioSource component to play different sounds by changing which clip it plays. Either design is valid; the NPC and gun examples below show one of each.

In UnityEngine, sound data is represented as an AudioClip, a sound emitter as an AudioSource, and the listener as an AudioListener.

You put the listener on the camera and audio sources on the objects that emit sounds. One GameObject can have multiple audio sources attached, although that can be tricky to manage, so one source per GameObject makes sense.

For example, if you have an NPC with one thousand voiced lines, then making a component for each would be silly. Instead you'd probably attach one audio source at roughly the right position and make it play different audio clips, with the references to those clips stored elsewhere. The reason you still go through an AudioSource is that the NPC can move, and the AudioSource tracks the position of the sound being played.
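A sketch of that one-source, many-clips setup (the component and field names are hypothetical):

```csharp
using UnityEngine;

// One AudioSource on the NPC plays many clips. Because the source sits
// on the NPC, 3D positioning stays correct while the NPC moves.
public class NpcVoice : MonoBehaviour
{
    [SerializeField] private AudioClip[] voicedLines; // referenced here, stored elsewhere
    private AudioSource _source;

    private void Awake() => _source = GetComponent<AudioSource>();

    public void Say(int lineIndex)
    {
        _source.Stop();                        // interrupt any current line
        _source.clip = voicedLines[lineIndex];
        _source.Play();
    }
}
```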

Then you can have something like a gun, which has, for example, three sounds total: "shoot", "try to shoot with no ammo", and "reload". In this case you could add a separate audio source for each, with each one playing only the single sound it owns, while the moment the sound starts is triggered by a script or the animation system. That trigger can come from some sort of event-ID system like the one described by @aer0ace
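That design could look like this (a hypothetical sketch; each source has its clip pre-assigned in the Inspector):

```csharp
using UnityEngine;

// One dedicated AudioSource per gun sound. Scripts or animation events
// only trigger Play(); which clip each source plays never changes.
public class GunSounds : MonoBehaviour
{
    [SerializeField] private AudioSource shootSource;
    [SerializeField] private AudioSource emptySource;
    [SerializeField] private AudioSource reloadSource;

    public void PlayShoot()  => shootSource.Play();
    public void PlayEmpty()  => emptySource.Play();
    public void PlayReload() => reloadSource.Play();
}
```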

Then you can have an environmental object like a torch (let's say you can't extinguish it) which continuously plays a "fire" sound effect. In this case the torch model, the light source, and the audio source all go into a single prefab, and once the prefab is done, you won't be touching the audio source at all.

For sounds that do not actually move, like footsteps, you can skip the AudioSource component entirely and simply play the sound from a script using the static AudioSource.PlayClipAtPoint linked above, which does not require a component. This works well for dynamic sounds like collisions with different materials, and so on.
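That also covers the footsteps-per-floor-type question from the original post. A hypothetical sketch (the tag names and clip fields are made up):

```csharp
using UnityEngine;

// No AudioSource component needed: each footstep is a fire-and-forget
// clip played at the player's current position.
public class Footsteps : MonoBehaviour
{
    [SerializeField] private AudioClip woodStep;
    [SerializeField] private AudioClip metalStep;
    [SerializeField] private AudioClip cementStep;

    // Called from an animation event on each foot-down frame.
    public void OnFootDown()
    {
        AudioSource.PlayClipAtPoint(ClipForGround(), transform.position);
    }

    private AudioClip ClipForGround()
    {
        // Raycast down to see which surface we're standing on.
        if (Physics.Raycast(transform.position + Vector3.up * 0.1f,
                            Vector3.down, out var hit, 0.5f))
        {
            switch (hit.collider.tag)
            {
                case "Wood":  return woodStep;
                case "Metal": return metalStep;
            }
        }
        return cementStep; // default surface
    }
}
```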

That's the rough idea of it. Use common sense and whatever works, without hunting for "the best". However, I don't see the point of putting everything onto the player.

4 Likes

Yes. The ID is just the most basic piece of information you pass through an event. You can then add more parameters as necessary, like position, volume, falloff, and whatever other sound properties you want. The key is to separate the event call from the systems to avoid dependencies.
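Extending the earlier sketch, the event payload just grows into a struct (again, all names are hypothetical):

```csharp
using UnityEngine;

// The event carries position and volume alongside the ID, so the
// manager can place the sound in the world without the caller ever
// touching audio code.
public struct SoundRequest
{
    public string id;
    public Vector3 position;
    public float volume;
}

public static class PositionalSoundEvents
{
    public static event System.Action<SoundRequest> OnSound;

    public static void Raise(SoundRequest request) => OnSound?.Invoke(request);
}

// Caller side:
//   PositionalSoundEvents.Raise(new SoundRequest {
//       id = "enemy.hit", position = transform.position, volume = 0.9f });
//
// Manager side, once the clip for request.id is looked up:
//   AudioSource.PlayClipAtPoint(clip, request.position, request.volume);
```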

The easiest way is to simply put sounds on the objects that create them. Not sure why you would want to go down the Management Madness route.

You say you did research; what reasons did you find in favor of doing it with two audio sources?

After many games and a lot of learning, my own system essentially evolved into this:

Disclaimer: I haven’t used this plugin, but maybe I should have… could have saved me a tonne of sound system dev time.

1 Like

Yup. I integrated Master Audio thinking, why do I need this? And after using it for a while, I get why it's so popular. It makes iterative audio production a lot easier than the more simplistic "apply audio on each game object" approach. I'm still not a fan of some of its design tenets, but I'd rather use it than not.

The right way depends on the game. A Tetris game doesn't need 3D sound effects, so it has different requirements for its audio sources than a 3D horror game where the sound needs to appear to come from a specific location in the game world.

The size of your sound files may also change how you want to reference them. A few small music files you can just reference and load directly when the game or scene starts, but with a large catalog of music you will more likely want to load from disk as needed to avoid long load times and high memory usage. Large numbers of unique sounds per level you probably want to load from disk only for that level, whereas sounds used across your entire game you probably want to load early and keep in memory.
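One simple way to do the per-level loading, sketched with Unity's built-in Resources API (the folder path is hypothetical; Addressables or asset bundles are the more scalable options):

```csharp
using UnityEngine;

// Loads a level's clips only while the level is running, and releases
// their memory when it unloads.
public class LevelAudioLoader : MonoBehaviour
{
    private AudioClip[] _levelClips;

    private void Awake()
    {
        // Loads every AudioClip under Assets/Resources/Audio/Level1.
        _levelClips = Resources.LoadAll<AudioClip>("Audio/Level1");
    }

    private void OnDestroy()
    {
        _levelClips = null;
        Resources.UnloadUnusedAssets(); // frees the clips' memory
    }
}
```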

FMOD is free if your budget is under $500k. When I've worked with it, it's been very comfortable from a programmer's point of view, and the sound people I work with are absolutely ecstatic about it.

2 Likes

The reason sound people are ecstatic about it is that it allows a great deal of the parametric control normally done in a DAW, and it can happen at runtime: you can make adaptive music with it, add variation to effects so no two sounds play the same, and so on.

However, this is an advanced topic.

As much as it is an advanced topic, it is actually a completely different topic.

After reading some of the negative reviews on the Asset Store, it turns out one needs additional third-party software in order to use FMOD. I wish people who promote solutions like this would be transparent and upfront about additional requirements like this (e.g. you have to learn a whole new piece of software first).

Yeah, the audio people will be using the FMOD Studio software. That’s a part of the package you get when you get FMOD.

It’s integrated very well into Unity, but your audio people will be using FMOD Studio to edit the sounds of the game, yeah.

The alternative, mind you, is that your audio people have to learn Unity. So unless they're already familiar with Unity for some reason, the barrier to entry for FMOD is way, way lower than it would be if there was some in-Unity system they had to use.

There are of course tradeoffs, but every single audio professional I've worked with in the industry prefers FMOD to in-Unity solutions, and that goes both for old-timers and for people fresh out of college. Hell, one guy we used for audio was good friends with the guy who makes Fabric, and he still preferred FMOD.

If you're a Unity person moving into sound, or if your sound person is already familiar with Unity, all of this won't be relevant at all. In those cases, rolling your own editor windows for sounds, or using something like Fabric, would be a good alternative. There are still cool features that FMOD has, but replicating them yourself is doable.

1 Like

Based on his post, I suspect there are no "audio people" but a one-man studio.

We didn't like that audio properties (falloff curve, reverb, etc.) were coupled to the actual audio source. So we created a solution with ScriptableObjects that you can define audio properties on, plus a system that pools audio sources: when you want to play a sound, you supply the system with a ScriptableObject and it plays it with the correct audio properties. It can also play one-shots and even be attached to a transform while it plays. A rough sketch of the idea is below.

Pretty happy with how it turned out.
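A minimal sketch of what that looks like (hypothetical names; the real system described above is more elaborate):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Audio properties live on an asset, decoupled from any AudioSource.
[CreateAssetMenu(menuName = "Audio/Sound Definition")]
public class SoundDefinition : ScriptableObject
{
    public AudioClip clip;
    [Range(0f, 1f)] public float volume = 1f;
    [Range(0f, 1f)] public float spatialBlend = 1f; // 0 = 2D, 1 = 3D
    public float minDistance = 1f;
    public float maxDistance = 50f;
    public AudioRolloffMode rolloff = AudioRolloffMode.Logarithmic;
}

// A pool of reusable AudioSources; playing a sound means applying a
// SoundDefinition's properties to a pooled source.
public class PooledAudioPlayer : MonoBehaviour
{
    [SerializeField] private int poolSize = 16;
    private readonly Queue<AudioSource> _pool = new Queue<AudioSource>();

    private void Awake()
    {
        for (int i = 0; i < poolSize; i++)
        {
            var go = new GameObject("PooledAudioSource");
            go.transform.SetParent(transform);
            _pool.Enqueue(go.AddComponent<AudioSource>());
        }
    }

    // Optionally pass a transform for the source to follow while playing.
    public void Play(SoundDefinition def, Vector3 position, Transform follow = null)
    {
        var source = _pool.Dequeue();
        _pool.Enqueue(source); // simple round-robin reuse; a real system
                               // would skip sources that are still playing

        source.transform.SetParent(follow != null ? follow : transform);
        source.transform.position = position;

        source.clip = def.clip;
        source.volume = def.volume;
        source.spatialBlend = def.spatialBlend;
        source.minDistance = def.minDistance;
        source.maxDistance = def.maxDistance;
        source.rolloffMode = def.rolloff;
        source.Play();
    }
}
```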

3 Likes