Hello,
I have a quick question.
Should I put an AudioSource on each game object that needs to play an AudioClip, or should I have a single AudioSource in my scene and just use that one AudioSource to play all my game sounds?
The first one is how it’s intended to work.
Of course, if you don’t want 3D sound anyway (panning and volume adjusted based on the source’s position relative to the listener), you can just use one source, which could simplify things in very small scenes. But even if 3D sound isn’t used in a large scene, I’d probably still give each sound-emitting object its own source, just for organization.
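For the per-object approach, it’s just a component with its own AudioSource. A minimal sketch (the class and field names here are illustrative):

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class EnemySounds : MonoBehaviour
{
    [SerializeField] private AudioClip hitClip; // assigned in the Inspector

    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    public void PlayHit()
    {
        // Plays from this object's position, so 3D panning and
        // distance attenuation apply automatically.
        source.clip = hitClip;
        source.Play();
    }
}
```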
Yes, it’s just a 2D game. Thanks.
Don’t mistake 3D and 2D sounds as correlating to 3D and 2D games! 2D games make great use of 3D sounds. Don’t just set sounds to 2D because your game is 2D, or you’ll lose a lot of potential immersion.
2D sounds play a clip’s channels as-is, while 3D sounds are mixed to mono, panned, and attenuated based on the source’s distance from the audio listener. You can also blend between the two.
Read the audio source documentation for more info.
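The property that controls this is AudioSource.spatialBlend. A minimal sketch:

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class SpatialBlendSetup : MonoBehaviour
{
    // 0 = fully 2D (channels played as-is), 1 = fully 3D (mixed to mono,
    // attenuated by distance to the listener); values between blend the two.
    [Range(0f, 1f)] public float blend = 1f;

    void Start()
    {
        GetComponent<AudioSource>().spatialBlend = blend;
    }
}
```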
Neither. The first idea will not work if you want to play “death sounds”, i.e. sounds that play the instant something gets despawned or destroyed, because the sound will stop playing when that happens. You may hear a short blip but usually nothing.
The second idea doesn’t work because you would only be able to play one clip at a time, unless you use PlayOneShot, which has its own problems: you give up all control over stopping or changing the volume of sounds played that way once they start. There are reasons audio plugins exist; they solve these problems and much more.
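For example, a minimal sketch of the single-source approach (the CentralAudio name and its fields are illustrative):

```csharp
using UnityEngine;

public class CentralAudio : MonoBehaviour
{
    [SerializeField] private AudioSource source;

    public void PlayEffect(AudioClip clip)
    {
        // One-shots can overlap on a single source, but each is
        // fire-and-forget: there's no handle to stop an individual shot
        // or change its volume once it starts.
        source.PlayOneShot(clip);
    }
}
```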
I don’t deny the limitations of Unity’s audio or the usefulness of such plugins (and I know the argument can be made that a developer shouldn’t have to do these things), but it’s pretty easy to work around them. You can create a prefab with a PlayDestroySound class designed to be instantiated when an object is destroyed: it plays the sound in the object’s place, then destroys itself when the sound is finished playing. This is possible with very little effort, I’d say; see the sketch below.
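A minimal sketch of that idea (PlayDestroySound is the illustrative class from above, not a built-in):

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PlayDestroySound : MonoBehaviour
{
    // Instantiate this prefab where the dying object was: it plays the
    // clip in that object's place, then cleans itself up.
    public void Play(AudioClip clip)
    {
        var source = GetComponent<AudioSource>();
        source.clip = clip;
        source.Play();
        Destroy(gameObject, clip.length); // self-destruct when the clip ends
    }
}
```

The dying object would then do something like `Instantiate(deathSoundPrefab, transform.position, Quaternion.identity).Play(deathClip);` just before it destroys itself.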
Again, the size of the scene/project plays a big role in how the audio is designed. A mobile app with one scene can probably get away with one audio source if it isn’t overused, for example; a large game that leans heavily on audio processing will probably need a little help from various utilities. Then there’s all the area in between.
Sure, although your use of the words instantiate and destroy makes me think you should be using a pooling plugin as well (or something you’ve written to do that). Garbage collection is a killer! Arguably the biggest enemy of Unity developers. On mobile you want to avoid absolutely every Instantiate and Destroy call that you can.
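A bare-bones version of that pooling idea might look like this (a sketch, assuming a prefab that’s just an AudioSource; the names are illustrative):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class AudioSourcePool : MonoBehaviour
{
    [SerializeField] private AudioSource prefab; // prefab containing only an AudioSource
    private readonly Queue<AudioSource> idle = new Queue<AudioSource>();

    public void PlayAt(AudioClip clip, Vector3 position)
    {
        // Reuse an idle source rather than Instantiate/Destroy per sound.
        AudioSource source = idle.Count > 0 ? idle.Dequeue() : Instantiate(prefab, transform);
        source.transform.position = position;
        source.gameObject.SetActive(true);
        source.clip = clip;
        source.Play();
        StartCoroutine(ReturnWhenDone(source, clip.length));
    }

    private IEnumerator ReturnWhenDone(AudioSource source, float delay)
    {
        yield return new WaitForSeconds(delay);
        source.gameObject.SetActive(false); // back to the pool, no garbage
        idle.Enqueue(source);
    }
}
```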
Everyone should have a collection of utilities they’ve written or bought, accumulated over time, as the basis for solving common issues like these.