Game engines have some amazing lightmapping tech, and with the rise of ray tracing we are probably going to see some very realistic-looking games in the near future.
But could the same or similar light-mapping/tracing technology be applied to the environmental soundscape of our games?
Are there solutions out there, or is there anything Unity is working on that could allow for better environmental audio?
Please define ‘better’ in the specific context of environmental audio, the solutions you’re actually aware of, and the shortcomings of those solutions.
For example, what testing have you done of NVIDIA VRWorks, i.e. the first Google hit for 'ray tracing for audio'? You already checked out the Unity port, right?
Ray tracing isn't really optimal for sound. Sound travels around corners (diffraction), propagates through materials (transmission), and so on.
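To make the diffraction point concrete, here is a tiny 2D sketch in plain Python (nothing engine-specific; the geometry, the detour heuristic, and the 6 dB-per-doubling figure are all assumptions for illustration, not how any shipping audio engine works). A single occlusion ray gives you a binary blocked/not-blocked answer, while a diffraction-aware model still lets attenuated sound reach the listener around the wall's edge:

```python
import math

def _orient(a, b, c):
    # Signed area of triangle abc; the sign says which side of ab the point c lies on.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # Proper segment intersection via orientation signs (ignores collinear edge cases).
    return (_orient(p1, p2, q1) * _orient(p1, p2, q2) < 0 and
            _orient(q1, q2, p1) * _orient(q1, q2, p2) < 0)

def ray_occluded(source, listener, walls):
    """Naive 'audio raycast': True if the straight source-listener line hits any wall."""
    return any(segments_intersect(source, listener, a, b) for a, b in walls)

def detour_attenuation_db(source, listener, edge):
    """Crude diffraction stand-in: attenuate by how much longer the path around
    the wall's free edge is than the blocked direct path. The 6 dB per doubling
    of path length is a rough assumption, not a wave-equation result."""
    direct = math.dist(source, listener)
    detour = math.dist(source, edge) + math.dist(edge, listener)
    return -6.0 * math.log2(max(detour / direct, 1.0))

if __name__ == "__main__":
    src, lst = (0.0, 0.0), (10.0, 0.0)
    wall = ((5.0, -1.0), (5.0, 4.0))                      # blocks the direct line
    print(ray_occluded(src, lst, [wall]))                 # True -> a pure ray test says "silent"
    print(detour_attenuation_db(src, lst, (5.0, 4.0)))    # ~ -2 dB -> still clearly audible
```

That gap between "silent" and "a couple of dB quieter" is exactly why treating audio like light rays falls apart around corners.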
We use Project Acoustics in our game, and it handles these things. But it's baked: we have rented 1024 eight-core nodes in Azure to bake our scenes. Small scenes can be baked in days on a 16-core Ryzen, but larger scenes need cloud resources and would take weeks otherwise. The result is awesome, though.
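For anyone unfamiliar with what "baked" means here, the idea is roughly this (a hypothetical sketch; the names and the parameters are made up for illustration and are not the Project Acoustics API): you run the expensive wave simulation offline over a grid of listener probes, store a few perceptual parameters per probe/source pair, and at runtime you only do cheap lookups.

```python
from dataclasses import dataclass

@dataclass
class AcousticParams:
    obstruction_db: float   # how much the direct sound is attenuated
    wet_level: float        # how much reverb to mix in
    decay_time_s: float     # RT60-style decay at this probe

def expensive_wave_solve(probe, source):
    # Placeholder for the hours-to-days offline simulation; returns dummy values here.
    return AcousticParams(obstruction_db=-3.0, wet_level=0.4, decay_time_s=1.2)

def bake(probes, sources):
    """Offline step: simulate every probe/source pair once and store the results."""
    return {(p, s): expensive_wave_solve(p, s) for p in probes for s in sources}

def lookup(baked, nearest_probe, source):
    """Runtime step: a table lookup instead of a simulation."""
    return baked[(nearest_probe, source)]

if __name__ == "__main__":
    table = bake(probes=("probe_a", "probe_b"), sources=("door", "waterfall"))
    print(lookup(table, "probe_a", "door"))
```

The runtime cost is tiny; all the pain is in the offline step, which is why the bake times (and the Azure bill) scale with scene size the way described above.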