I’m developing a 3D arcade game for Android. When I press any of the buttons in the UI it does play the sound I’ve attached to it (in an AudioSource), but with about 800-900 milliseconds of latency. That’s way too much and worsens the UX.
I’ve attached a script to my button. In this script I’ve declared an “AudioSource[] audioSources;” array, and in the Start() function I’ve written “audioSources = GetComponents<AudioSource>();”.
Then I’ve pointed the button’s OnClick() event at a public method on that same script. This public method contains “audioSources[0].Play();”.
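For reference, a minimal sketch of the setup described above (the class and method names are assumptions, not the poster’s actual code):

```csharp
using UnityEngine;

// Attach to the Button GameObject, which also holds one or more AudioSources.
public class ButtonSound : MonoBehaviour
{
    AudioSource[] audioSources;

    void Start()
    {
        // Grabs every AudioSource component on this GameObject.
        audioSources = GetComponents<AudioSource>();
    }

    // Wire this method into the Button's OnClick() list in the Inspector.
    public void PlayClickSound()
    {
        audioSources[0].Play();
    }
}
```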
Any way to lower the latency of the button’s sound? Thank you
Sounds good, but I couldn’t make it work. I passed audioSources[0] as the argument again and it says it cannot convert UnityEngine.AudioSource to UnityEngine.AudioClip.
I am having a lot of latency too, and I’m sure it’s because of Unity. There is nothing wrong with my configuration and I’ve tried everything; it’s just this engine.
To fix audio latency:
Project Settings > Audio > DSP Buffer Size > set it to Best Latency (small buffer size). As of today, with this setting, it makes a glitched sound on Windows builds, while macOS, Android, and iOS are completely fine. You might want a larger buffer size on Windows (at the expense of more latency).
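If you need to pick a different buffer size per platform at runtime rather than in Project Settings, Unity exposes this through AudioSettings. A sketch; the specific buffer values (1024 for Windows, 256 elsewhere) are just examples, not recommendations:

```csharp
using UnityEngine;

public class DspBufferSetup : MonoBehaviour
{
    void Awake()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();
#if UNITY_STANDALONE_WIN
        // Larger buffer on Windows to avoid the glitched sound, at the cost of latency.
        config.dspBufferSize = 1024;
#else
        // Small buffer ("Best Latency") on the other platforms.
        config.dspBufferSize = 256;
#endif
        // Reset reinitializes the whole audio system; do this before playing anything.
        AudioSettings.Reset(config);
    }
}
```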
If that is not enough you can use native methods of each platform. I just made Native Audio asset store plugins which can make a native call to both iOS and Android’s fastest native way from one central interface. Unity Asset Store - The Best Assets for Game Making
There are various ways of playing audio on the native side; here’s my choice:
On iOS it uses OpenAL. It is faster than AVAudioPlayer, AudioToolbox, and SystemSound.
On Android it uses AudioTrack, which I confirmed to be faster than SoundPool, with no meaningful difference from the C++-side OpenSL ES of the NDK. EDIT: v2.0 now uses OpenSL ES and there is a meaningful gain.
I have compiled all of my findings in here : Native Audio - Unity Plugins by Exceed7 Experiments
PS. I have used FMOD for Unity before. The best I could do, in addition to choosing the best file format, requires editing FMOD Unity’s source code to use a very low buffer size. Even then the latency is only about equal to Unity’s “Best Latency” (and the sound cracks more too, due to the low buffer size).
To fix input latency:
This is much more difficult, as the path by which Unity receives touches from the Xcode project is almost hardwired and not meant to be replaced easily. (Unlike audio, where we just leave the Unity path alone and use our native method.)
I made iOS Native Touch, which can reduce this input latency. But you will lose many conveniences that Unity provides, including finger ID tracking, the stationary state, etc.
I develop for Mac and Windows. There is no delay on Mac, but on Windows there is. In both cases I used a Bluetooth speaker, and after half a day I realized the delay was due to the speaker, although it’s strange that it doesn’t occur on the Mac.
Hello Everyone,
I currently have trouble with loading a .wav file and some animations.
It works just fine on Windows (Unity editor and build), but on iOS and in the Unity editor (Mac), upon pressing play, the sound is played but the whole app “stops”… until at some point the audio has finished loading, I guess, and I can stop the animation/sound and press play again, and then they both work fine in sync.
So I guess it is some kind of memory / buffer issue.
I thought a simple async/await upon loading the sound would work, but it doesn’t. I have to play it, let it freeze until something is freed, and then the whole thing can be reset and played.
As you might tell from the words I’m using to describe this, I have no clue what I’m talking about.
Any help from anyone with some understanding and a bit of time to point me on the right track is tremendously appreciated .
I’ve never used Unity on a Mac. I don’t think you should have this kind of discrepancy between operating systems, so it might be a bug, or some quirk of iOS that I don’t know about.
For a workaround and some process of elimination:
Click on the clip in the Unity editor to inspect its settings.
What are the clip’s load settings? Just the default?
Compressed In Memory?
Encoded with Ogg/Vorbis? https://docs.unity3d.com/Manual/class-AudioClip.html
Is the file large? Will you be using it often? Is it possible that you will play multiple instances of that sound at the same time?
If you answered yes, no, no, then you might try setting the file to stream.
If you answered no, yes, yes, then you might try setting it to decompress on load, or to even use the wav file directly and not use ogg/vorbis just to see if that might be the issue.
Try experimenting with these few settings (Load Type and encoding) to identify what works and what doesn’t. You can read about what each of them does, and a combination of these settings allows you to pick your priorities.
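If you want to flip these import settings from code rather than clicking through the Inspector (handy when testing many combinations), an editor-script sketch; the asset path is hypothetical and the Streaming/Vorbis choice is just one of the combinations discussed above:

```csharp
using UnityEditor;
using UnityEngine;

public static class ClipSettingsTweaker
{
    // Editor-only menu item: switch a clip to Streaming with Vorbis compression.
    [MenuItem("Tools/Set Clip To Streaming")]
    static void SetClipToStreaming()
    {
        // Hypothetical path; point this at your own clip.
        string path = "Assets/Audio/music.ogg";
        var importer = (AudioImporter)AssetImporter.GetAtPath(path);
        AudioImporterSampleSettings settings = importer.defaultSampleSettings;
        settings.loadType = AudioClipLoadType.Streaming;
        settings.compressionFormat = AudioCompressionFormat.Vorbis;
        importer.defaultSampleSettings = settings;
        importer.SaveAndReimport();
    }
}
```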
Those priorities are:
Size on disk (your project/exe): wav has the highest; ogg is typically used to reduce the size.
Size in memory: streaming has the lowest; Decompress On Load has the highest (it decompresses an ogg to a wav at runtime and holds it in memory, for example).
Response time when play is requested: Compressed In Memory has the highest, as it has to decompress the entire file every time you try to play it; the others should be quick. Decompress On Load should decompress at scene start, wav doesn’t need to be decompressed, and streaming should be quick whether compressed or not, because it doesn’t work with the whole file at once.
Scene loading time: streaming has the lowest; Decompress On Load has the highest.
And for streaming: streaming too many things at once can start to impact your CPU, particularly on lower-end devices, though if you keep the number relatively low, the impact shouldn’t be significant.
I would also be careful not to test playing a sound immediately at scene start. Wait a second or two after scene start to test how quickly a sound plays, particularly if you are synchronizing the audio with other things and the timing needs to be accurate.
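One simple way to add that settling delay when testing is a coroutine Start; a minimal sketch, assuming the AudioSource is assigned in the Inspector:

```csharp
using System.Collections;
using UnityEngine;

public class DelayedSoundTest : MonoBehaviour
{
    public AudioSource source;

    IEnumerator Start()
    {
        // Give the scene a couple of seconds to settle before timing the sound.
        yield return new WaitForSeconds(2f);
        source.Play();
    }
}
```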
Dear @Hikiko66 and all, I thought I would give a little bit more detail to my problem. Hopefully we can all learn in the process.
The audio files I intend to play aren’t available as prefabs or assets and cannot be flagged in the editor with “Preload Audio Data” or “Load in Background” flags.
The files (.wav) are generated by the user from a microphone capture, and saved in an Application.dataPath folder.
I have tried many ways to load the clip, but it doesn’t seem to load it in memory until I actually start playing it.
I think the flags mentioned earlier are deselected by default for external files.
Without “Preload Audio Data” or “Load in Background” selected, the loading happens on the main thread, and as the file can be pretty heavy (6 MB), loading it stalls the main thread (freezing everything else; apparently this is called a “frame hitch”) while it is being “played” for the first time.
When the “loading” or “partial playing” has finally completed, the main thread is freed and everything else works as expected. I can then stop the sound, which is obviously out of sync with the rest, and replay it in sync, since everything is now loaded in memory.
Any idea how I can really preload the audio and play it only when it is fully in memory?
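One approach that should avoid the main-thread stall is loading the file asynchronously through UnityWebRequestMultimedia with streaming disabled, and only playing once the clip reports Loaded. A sketch, assuming Unity 2020.2+ (for UnityWebRequest.Result) and a placeholder file name; point the path at your actual recording:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class RecordedClipLoader : MonoBehaviour
{
    public AudioSource source;

    IEnumerator Start()
    {
        // "file://" prefix is required for local paths; "recording.wav" is a placeholder name.
        string url = "file://" + Application.dataPath + "/recording.wav";
        using (UnityWebRequest request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.WAV))
        {
            // Disable streaming so the whole clip is decoded up front, off the main thread.
            ((DownloadHandlerAudioClip)request.downloadHandler).streamAudio = false;
            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError(request.error);
                yield break;
            }

            AudioClip clip = DownloadHandlerAudioClip.GetContent(request);
            // Wait until the audio data is fully in memory before playing.
            while (clip.loadState != AudioDataLoadState.Loaded)
                yield return null;

            source.clip = clip;
            source.Play();
        }
    }
}
```

The key design point is that playback is gated on clip.loadState rather than started immediately, so the animation and audio begin in sync instead of the first play doubling as a blocking load.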