I’m not sure if this is a scripting question or not, but this is certainly the most responsive forum I’ve experienced.
When I want an AI to ‘hear’ a noise when the source is within NN meters and above XX minimum volume, is this something that is coded on the “audio source side” of things – kind of like a CastMsgToObjectsInRange() function?
Is there a way I can simplify this by creating a fake audio-listener on the AI itself?
Has anyone had any experience with this and how did you end up managing this?
Note - I thought I had run into a thread that talked about this a while back, but had no luck in finding it again.
Unity doesn’t provide you with any way of knowing that much about sound; it can’t even tell you “how loud” a sound is. This is not something you’re going to want to deal with in the manner you mentioned.
Maybe have some sort of collider that increases or decreases in size depending on how loud something is supposed to be. If an enemy is within this collider, then he responds.
I’m just throwing that out there. There may be much better solutions.
Well, I really don’t have a choice in the matter. I have to have a way for AI to react to nearby sounds. If nothing native is available, I’ll attach a function to the game object that contains the sound and broadcast to all objects in a predefined radius.
Pseudo (theoretical) code:

// attached to the object that has the audio source
var objArray : GameObject[];      // objects that can potentially hear sounds
var volumeOffset : float = 5;

function PlaySound () {
    audio.Play();
    var sndRadius = audio.volume * volumeOffset;
    BroadcastSoundWave(sndRadius);
}

function BroadcastSoundWave (sndRadius : float) {
    var args : Array = new Array(gameObject, audio);
    for (var i = 0; i < objArray.Length; i++) {
        var obj = objArray[i].transform;
        var dist = Vector3.Distance(transform.position, obj.position);
        if (dist < sndRadius) {
            obj.BroadcastMessage("heardSound", args, SendMessageOptions.DontRequireReceiver);
        }
    }
}
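Stripped of the Unity-specific calls, the broadcast logic above boils down to a radius check against each potential listener. Here's a minimal sketch of just that check in plain JavaScript (the function name and the position objects are illustrative, not Unity API; it also compares squared distances to avoid the square root, as discussed below):

```javascript
// Returns the subset of listener positions inside the sound radius.
// In Unity you would then call BroadcastMessage on each of these hits.
function listenersInRange(sourcePos, listenerPositions, sndRadius) {
  const sqRadius = sndRadius * sndRadius; // compare squared values, no sqrt needed
  return listenerPositions.filter(p => {
    const dx = p.x - sourcePos.x;
    const dy = p.y - sourcePos.y;
    const dz = p.z - sourcePos.z;
    return dx * dx + dy * dy + dz * dz < sqRadius;
  });
}

const src = { x: 0, y: 0, z: 0 };
const listeners = [{ x: 3, y: 0, z: 0 }, { x: 10, y: 0, z: 0 }];
console.log(listenersInRange(src, listeners, 5).length); // 1 — only the 3 m listener is in range
```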
I hope for some decent sound/audio management in future versions though.
From a quick bit of searching, it appears that sound intensity is inversely proportional to the square of the distance from the sound source (I imagine Jessy will be able to verify that). It turns out that finding the square of the distance between two points is much more efficient than using Vector3.Distance, so you might want to use something like this:-
var hearingThreshold: float; // Depends on character's hearing ability
var sqDist: float = (transform.position - obj.position).sqrMagnitude;
var perceivedLoudness: float = audio.volume / sqDist;
if (perceivedLoudness > hearingThreshold) {
    // Target can hear sound
}
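The inverse-square relationship in that snippet is easy to sanity-check numerically. Here's the same calculation in plain JavaScript (the names stand in for the Unity values – "volume" for audio.volume, and the position objects for transform.position):

```javascript
// Inverse-square model: perceived loudness = volume / squared distance.
function perceivedLoudness(volume, sourcePos, listenerPos) {
  const dx = listenerPos.x - sourcePos.x;
  const dy = listenerPos.y - sourcePos.y;
  const dz = listenerPos.z - sourcePos.z;
  const sqDist = dx * dx + dy * dy + dz * dz; // equivalent of Vector3.sqrMagnitude
  return volume / sqDist;
}

const src = { x: 0, y: 0, z: 0 };
const at1m = perceivedLoudness(1, src, { x: 1, y: 0, z: 0 });
const at2m = perceivedLoudness(1, src, { x: 2, y: 0, z: 0 });
console.log(at2m / at1m); // 0.25 — doubling the distance quarters the intensity
```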
I recall reading about using the sqrMagnitude for performance reasons rather than Distance. That’s a vital piece of information – I’m glad you mentioned it!
Anyway, the previous number is only a rough estimate, and to a human ear it isn’t equal across all frequencies. In addition, if you’re not in outer space, various frequencies fall off at different rates, even without taking actual hearing into consideration. And then there are sound occluders (read: any solid object, strong wind, water (frequency-dependent!), etc. – which Unity doesn’t even support yet). It gets complicated!
Personally, I’d go with something like the following. You’d use a greater value (like -.05) for falloff in a reverberant room, and a lesser value (like -2) in a room with a bunch of couches and stuff in it. It’s an extreme simplification, but should work well in practice. I’d just leave audio.volume out of it, unless you actually need to script that too, in order to change the real volume.
(If you go with the “outdoors” assumption from the previous post, the falloff would be approximately -.602, i.e. -2 log 2.)
var hearingThreshold: float; // Depends on character's hearing ability
var falloff : float; // always negative
var sqDist: float = (transform.position - obj.position).sqrMagnitude;
var perceivedLoudness: float = Mathf.Pow(sqDist, falloff);
if (perceivedLoudness > hearingThreshold) {
    // Target can hear sound
}
Is there some kind of double-post rule that I am not aware of…
Jessy, thanks for the details. The equations will change dramatically when comparing human to animal. In my example, it is the animal that will respond to sound (when the player moves too fast near an animal, it runs away). So I’m looking to do something that generally covers the scenario, not so much bouncing sound off of mountains or trying to broadcast in space.
An Intensity = Volume/Distance comparison (or something similar) should, I think, be good enough for my purposes. IMO, I would leave acoustic management and distribution to the engine rather than the script. I hope UT puts some focus on this to extend realism; some realistic echoes and other sound effects would be phenomenal.
There’s no reason not to use one of the things we mentioned here. They’re both extremely simple scripts.
That would be fine if you could give your AI audio listeners, but you can’t. In addition, neither the volume property nor any other has anything to do with loudness/amplitude/intensity-at-a-distance. Volume is what your waveform is multiplied by at a radius of one meter from the audio source; not too useful a number, really.
Think about this. Let’s say you want two different sound effects. They won’t be at the same volume, if you change their volume properties to be the same. You’re trying to rely on things that aren’t there. Try my version of andeeeeeeeeeeeeeeee’s script and save yourself a headache.
@Jessy If you had a room of noisy machines going on and off, would you add an offset to hearingThreshold while the machines are on, or increase the falloff?
In a realistic situation, the falloff probably wouldn’t change noticeably. (This wouldn’t be true if the noisy machine were absolutely huge and got in the way, but let’s ignore that case.) So falloff should stay constant.
On the other hand, as you know from your own experience, you’d potentially need to make a louder sound in order to be heard over heavy machinery. That’s what hearingThreshold is for. If you think something would need to be twice as loud to be heard, then you’d multiply hearingThreshold by two. Pretty easy, as long as your falloff value is good.
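That threshold-scaling idea fits in one line of code. A sketch in plain JavaScript (noiseFactor is a hypothetical multiplier I've introduced for illustration: 1 in a quiet room, 2 if a sound must be twice as loud to be noticed over the machines):

```javascript
// Raise the effective hearing threshold when ambient noise is present.
function canHear(perceivedLoudness, baseThreshold, noiseFactor) {
  return perceivedLoudness > baseThreshold * noiseFactor;
}

console.log(canHear(0.3, 0.2, 1)); // true  — heard in a quiet room
console.log(canHear(0.3, 0.2, 2)); // false — masked while the machines are on
```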
This could lead to some interesting gameplay, actually. Cool idea.
Sorry Jessy – I didn’t mean to imply I wasn’t going to use your example, though I know that’s what my post seemed to suggest. I was just identifying andeee’s original post and my initial suggestion as ‘doable for my situation’ (edit: but without using volume), albeit not as true a solution (or even close) as the one you posted.
Audio is the output of your program, and it’s better not to use your output as input. You should create some kind of “footstep” event, subscribe your AIs to that event, and have each one check whether it is close enough to decide to react or ignore it.
Great discussion here. I’m really not good at any sort of math, but I was wondering: could I perform a SphereCast from my main character’s footsteps, and if it collides with an enemy, assume he heard it? Walking or running generates a bigger sphere, while sneaking generates a small one.
Does this logic make sense in practice? Will it have any impact on performance?
This logic is pretty good and will probably work well. It is also easily predictable, so you can define things like AI patrol routes with constant ranges in mind. This is great for level design, and for networking if you plan any.
If you use an exclusive physics layer for your enemies and the correct layer mask for the SphereCast, the performance impact will be minimal.
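The one piece of the SphereCast approach the posts above leave open is how to pick the sphere's radius from the player's movement. A tiny sketch in plain JavaScript (the speed breakpoints and radii here are made-up tuning values, not anything from Unity):

```javascript
// Map movement speed (m/s) to a footstep-noise radius (m).
function footstepRadius(speed) {
  if (speed < 1.5) return 2;  // sneaking: small noise sphere
  if (speed < 4)   return 6;  // walking
  return 12;                  // running: heard from far away
}

console.log(footstepRadius(1)); // 2
console.log(footstepRadius(3)); // 6
console.log(footstepRadius(6)); // 12
```

In Unity you would feed the returned value into the SphereCast (or an OverlapSphere) each time a footstep sound plays.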