Steam Audio simple realtime transmission setup not working

For a VR game I am looking to use Steam Audio, but it seems that with the basic realtime setup the transmitted sound ignores most of the sound path.

The Capsule is the audio source; the audio listener is on my HMD. The black walls let all audio pass, while the white floor and wall stop all audio (settings below).
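Conceptually, the two wall materials boil down to per-band transmission coefficients like this (just a sketch of the values I entered, not the actual Steam Audio material asset format; the real settings are in the screenshots):

```csharp
// Sketch of the intended acoustic materials (values only, not the real
// Steam Audio material asset layout - see the screenshots for the actual settings).
public struct WallAcoustics
{
    public float transmissionLow, transmissionMid, transmissionHigh;
}

public static class MyWallMaterials
{
    // Black walls: fully transparent to sound (transmission = 1 in all bands).
    public static readonly WallAcoustics Black = new WallAcoustics
    {
        transmissionLow = 1f, transmissionMid = 1f, transmissionHigh = 1f
    };

    // White floor and wall: fully blocking (transmission = 0 in all bands).
    public static readonly WallAcoustics White = new WallAcoustics
    {
        transmissionLow = 0f, transmissionMid = 0f, transmissionHigh = 0f
    };
}
```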

Without the black walls there is no sound on the other side of the white wall (as expected). But with the black walls enabled, there is now sound on the right side of the black wall. The only way for a sound ray to get there would be through the white wall, so it should have been stopped.
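To double-check that the white wall really sits between the source and the listener (and is on a layer Steam Audio sees), I would use a throwaway script like this. It is only a rough sketch with plain Unity physics; the `source` and `listener` references are placeholders for my Capsule and HMD:

```csharp
using UnityEngine;

// Throwaway debug helper: logs every collider on the straight line from the
// audio source to the listener, so I can see whether the white wall is
// actually in the path (and which layer it is on).
public class AudioPathDebug : MonoBehaviour
{
    public Transform source;    // the Capsule with the audio source
    public Transform listener;  // the HMD with the audio listener

    void Update()
    {
        Vector3 dir = listener.position - source.position;
        foreach (RaycastHit hit in Physics.RaycastAll(source.position, dir.normalized, dir.magnitude))
        {
            Debug.Log($"Between source and listener: {hit.collider.name} " +
                      $"(layer {LayerMask.LayerToName(hit.collider.gameObject.layer)})");
        }
    }
}
```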

What is wrong with my setup?

(Unity 2022.2.1f1; Steam Audio 4.1.4; … URP; SteamVR via OpenXR, Valve Index)

Settings (screenshots):

Would you mind sharing the code around your raycasting?

Thanks for replying. The code is all Steam Audio; I did not alter the raycasting. I think most of the calculations are done in the .dll, but the Unity C# code is on their GitHub (https://github.com/ValveSoftware/steam-audio).

Maybe this is more a Steam Audio issue than a Unity issue (I also asked on the Steam forum). I have not used spatializers until now, but since the use case is very simple I was hoping there was just a setting I had forgotten to set.

The only alternative to the ‘Raycast’ Occlusion type is ‘Volumetric’, which spreads the listening area out so that direct audio fades in earlier when coming around a corner. Same result.

[Screenshot: Steam Audio Source occlusion/transmission settings]

For completeness, I have a Steam Audio Listener on the HMD.
[Screenshot: Steam Audio Listener component on the HMD]

And of the Steam Audio Settings, only Max Occlusion Samples should influence transmission in this case; the rest is about realtime reflections and baking.

[Screenshot: Steam Audio Settings]

I have not dived too deeply into their code yet, but it seems they have raycasting functions for “first hit” or “any hit” on objects in their audio layer mask. It may be that the occlusion algorithm only uses the first collider the ray hits in this mode; if that’s the case, it would compute the sound effect based only on the first black wall, regardless of what’s behind it.
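To make the “first hit” idea concrete in plain Unity terms (Steam Audio’s actual ray tracing happens in the native library, so this is only an analogy, not their code): a single raycast reports the nearest collider and nothing behind it.

```csharp
using UnityEngine;

// Analogy only, using standard Unity physics: a "first hit" query stops at the
// nearest collider, so a fully transmissive black wall in front would hide the
// blocking white wall behind it.
public static class FirstHitExample
{
    public static void Check(Vector3 source, Vector3 listener)
    {
        Vector3 dir = listener - source;
        if (Physics.Raycast(source, dir.normalized, out RaycastHit hit, dir.magnitude))
        {
            // If the occlusion test stopped here, only this surface would
            // contribute to the transmission calculation.
            Debug.Log($"Nearest surface only: {hit.collider.name}");
        }
    }
}
```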

But since you said that the volumetric setting gives the same result, I must say I am as puzzled as you are…

I think those raycasts are only used to set up the scene, not directly for audio processing.

For now I will make all my walls non-transmitting via the material, and take care not to place sound-transmitting colliders near each other. Handling transmission through multiple objects just seems like simple math compared to the complicated things it already does (sketch of what I mean below).
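What I mean by “simple math”: as far as I understand it, stacking surfaces should just multiply their per-band transmission coefficients, so a 0-transmission white wall behind a 1-transmission black wall still kills the sound. A sketch of my own reasoning (not Steam Audio’s implementation; the material lookup is a hypothetical placeholder):

```csharp
using UnityEngine;

// My own sketch of multi-surface transmission (not Steam Audio's code):
// walk every collider between source and listener and multiply the per-band
// transmission factors, so any fully blocking wall forces the result to zero.
public static class CombinedTransmission
{
    public static Vector3 Compute(Vector3 source, Vector3 listener)
    {
        Vector3 dir = listener - source;
        Vector3 transmission = Vector3.one; // low / mid / high band factors

        foreach (RaycastHit hit in Physics.RaycastAll(source, dir.normalized, dir.magnitude))
        {
            transmission = Vector3.Scale(transmission, GetTransmissionForSurface(hit.collider));
        }
        return transmission; // (0,0,0) as soon as any white wall is on the path
    }

    // Hypothetical lookup: in a real setup this would come from the acoustic
    // material assigned to the hit surface. Here: black walls pass everything,
    // everything else blocks.
    static Vector3 GetTransmissionForSurface(Collider c)
    {
        return c.CompareTag("BlackWall") ? Vector3.one : Vector3.zero;
    }
}
```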