📌 Workaround for spatial audio regression on visionOS 1.1

Hi there visionOS VR devs! A number of you have reported an issue with spatial audio where one ear is “stuck” being louder than the other, regardless of where you turn your head. We’re hoping Apple can fix this in a future update, but in the meantime there is a workaround that you can apply to the Xcode project generated by Unity. We will also be integrating this fix into a future version of the engine.

You can find more details in the original thread on this topic. To fix this issue, simply add the following line after line 141 of Classes/UnityAppController.mm (in the startUnity method):

[audioSession setIntendedSpatialExperience:AVAudioSessionSpatialExperienceBypassed options:@{} error:nil];
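For reference, here’s roughly where that line lands. This is only a sketch: the method body is abbreviated, the exact contents and line numbers vary between Unity versions, and audioSession is the AVAudioSession instance the method already sets up.

// Classes/UnityAppController.mm (abbreviated sketch)
- (void)startUnity:(UIApplication*)application
{
    // ... existing startup code ...

    AVAudioSession* audioSession = [AVAudioSession sharedInstance];

    // Workaround: tell the OS to bypass its own spatial experience,
    // since Unity is spatializing the audio itself.
    [audioSession setIntendedSpatialExperience:AVAudioSessionSpatialExperienceBypassed
                                       options:@{}
                                         error:nil];

    // ... rest of the method ...
}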

If you want to apply this fix “permanently,” you can also modify this file in your Unity installation, typically something like:

/Applications/Unity/Hub/Editor/2022.3.21f1/PlaybackEngines/VisionOSPlayer/Trampoline/Classes/UnityAppController.mm.

This file is copied into any future visionOS builds made by that particular version of Unity (2022.3.21f1 in this case).

For Mixed Reality, it’s a little more complex. You may not want AVAudioSessionSpatialExperienceBypassed, which, as the name implies, bypasses platform-level spatial audio. That works fine for unbounded apps, as long as you enable the Apple visionOS XR plugin under Project Settings > XR Plug-in Management and put your AudioListener on a Transform with a properly configured TrackedPoseDriver. All of that is to say: if Unity is handling spatial audio, the app needs to move the AudioListener around to match the user’s head pose.

If you are using MR without an immersive space (a.k.a. bounded mode), your app cannot access the user’s head pose. In that case, you may want to try replacing AVAudioSessionSpatialExperienceBypassed with AVAudioSessionSpatialExperienceHeadTracked in the snippet above. We will be exploring these options when we address this problem on the Unity side. It may end up being an API that we expose to C# so that you can control it dynamically… the final solution is still TBD.
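If you go that route, the change is just swapping the constant in the same spot. A sketch, with error logging added for illustration:

// Same location in startUnity: let the OS spatialize audio with its
// own head tracking, since a bounded app can't move the AudioListener.
NSError* error = nil;
if (![audioSession setIntendedSpatialExperience:AVAudioSessionSpatialExperienceHeadTracked
                                        options:@{}
                                          error:&error])
    NSLog(@"setIntendedSpatialExperience failed: %@", error);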


Is this the same issue as Unity Issue Tracker - No audio output when using the visionOS build?


Thanks @mtschoen, this fix worked for our fully immersive app, very much appreciated!!!


No, that was a different issue, but I think the fix for that has shipped.


Thanks for the clarification. It looks like Apple is also changing some things around with how audio is handled.


Does this problem persist in the 1.2 beta? Has anyone tested it yet?

Thanks


THIS HACK WORKS! WOOOO! THANKS @mtschoen!
I spent hours diagnosing my spatialized audio system, thinking it was a Unity issue. I should have known it was Apple. Haha.

I’m gonna implement the Meta XR Audio SDK and see if it works with Apple Vision Pro. I’ll try to report back and let y’all know how it goes.

I have not, but I doubt it. Apple moves slow with this stuff.
Besides, this is a super easy hack. Takes less than 5 minutes to find that file and paste that line of code… though you do have to remember to do this with each Unity update. Small price to pay when working on the cutting edge of tech, IMO.

We’re using the Resonance plugin with FMOD to get spatial audio on Vision Pro at the moment, but I am curious if the Meta XR plugin would also work (that would simplify cross-platform setup on the FMOD side of things…)


After some tests, it seems the Meta XR Audio SDK is NOT compatible with Apple Vision Pro. I suspect it might be getting stripped out at build time, because if I reference any Meta XR Audio stuff in my code, an error is thrown when building. And if I comment those lines out, it does build, but the audio is not affected.

Such a shame. I really like the Meta XR Audio SDK. Room acoustics and Ambisonic audio sounds great. Hopefully they somehow make it compatible with AVP in the near future.

Bummer, Resonance it is, I guess!


@asimdeyaf alternatively, you can give PHASE from Apple a try. They recently added visionOS support in Unity.

Just keep in mind, you have to compile and sign the assemblies.


PHASE works great. Now we can use all the spatialisation tools of RealityKit. It’s night and day compared to no spatialisation, or to Resonance.

Has anyone managed to make it work with real-time audio generation? I can only play audio samples, but I’m interested in spatialising something in real time.


Is PolySpatial passing all AudioSource transforms over to Apple’s HRTFs, as long as we have a TrackedPoseDriver installed on the head transform?

It seems PHASE in visionOS 2.0 will support stream nodes for real-time audio: PHASEStreamNode | Apple Developer Documentation

That’s fantastic! Did you experience any drawbacks compared to using Unity’s audio on visionOS?

Can you adjust pitch and volume from a script, for example? (I’m working with an engine sound.)