If you want accurate scheduling, you usually want to schedule ahead, which is common to all FMOD-based audio engines. Rather than waiting until it’s time to play, as soon as you know you need the next sound, compute the delay and then play the sound with that delay specified. That should give you very accurate results.
You don’t need FixedUpdate: scheduling the audio to play at exactly the right time in the future is already a way of getting away from frame dependency. It also doesn’t restrict you to playing audio only at “regular intervals”, although you can obviously set it up to do so.
A simple example:
If you schedule the audio to play 0.5 seconds after you schedule it, then the timing will be accurate unless you drop below 2 frames per second, at which point you would not be able to schedule the audio before the time it should already have played, because a frame is taking more than 0.5 seconds to run. (Depending on the load settings of an audio clip, it can also take a little bit of time to initialize and be ready to play.)
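A minimal sketch of that pattern in a MonoBehaviour (the class name and the 0.5 second lookahead are just placeholders for illustration):

using UnityEngine;

// Schedules a clip a fixed lookahead into the future, so the audio clock,
// not the frame loop, decides exactly when playback starts.
[RequireComponent(typeof(AudioSource))]
public class ScheduledPlayer : MonoBehaviour
{
    AudioSource source;
    public double lookahead = 0.5;   // seconds of scheduling headroom

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    // Call this as soon as you know the sound will be needed.
    public void PlayWithLookahead()
    {
        // dspTime is read on the audio clock, so the start time is exact
        // regardless of how long the current frame takes.
        source.PlayScheduled(AudioSettings.dspTime + lookahead);
    }
}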
audio.PlayScheduled(AudioSettings.dspTime);
You’re not scheduling audio to play in the future, you’re scheduling it to play now. That is frame-dependent and exactly what Play() does, which is not what you want.
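To actually schedule ahead, add an offset to dspTime, something like this (0.5 is just an example lookahead):

audio.PlayScheduled(AudioSettings.dspTime + 0.5); // schedules half a second into the future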
But what if the events which trigger the playback of audio occur in the fixed update loop?
Isn’t that just increasing the latency further if I schedule the audio to play AFTER the fixed update event which I would like to be concurrent with the onset of the audio?
We could probably provide more useful advice if we knew more about what you’re trying to accomplish.
In general, scheduling is good for chained sounds, like dynamic play-by-play in a sports game. If you need to play audio based on player input, perhaps synced to music, that’s very hard to do. Games like Guitar Hero cheat a lot, usually playing the right sounds even as you screw up and only turning the right sounds off after you’ve screwed up for a while.
I would like to synchronize events which take place in the physics engine with corresponding audio events. The events may take place as little as 1 ms apart where:
Time.fixedDeltaTime = 0.001f;
From what you guys are suggesting, the only way this can be achieved is to predict when the interactions will take place in the future and schedule the audio accordingly.
This means that I am either going to have to predict the physics using a simplified deterministic simulation or somehow delay the drawing of frames. I can’t buffer that many frames, so the deterministic route is the only way forward.
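Something like this is what I have in mind (just a sketch; PredictSecondsToImpact stands for the simplified deterministic model I would have to write):

using UnityEngine;

// Each physics step, ask a simplified deterministic model when the next
// impact will happen, and schedule its sound on the audio clock.
public class ImpactAudioScheduler : MonoBehaviour
{
    public AudioSource impactSource;
    public double lookahead = 0.05;   // how far ahead I'm willing to schedule
    bool scheduled;

    void FixedUpdate()
    {
        if (scheduled) return;

        // Hypothetical predictor: seconds from now until the impact,
        // negative if no impact is foreseen within the lookahead window.
        double secondsToImpact = PredictSecondsToImpact();

        if (secondsToImpact >= 0 && secondsToImpact <= lookahead)
        {
            impactSource.PlayScheduled(AudioSettings.dspTime + secondsToImpact);
            scheduled = true;
        }
    }

    double PredictSecondsToImpact()
    {
        return -1.0; // placeholder for the simplified deterministic simulation
    }
}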
I’m the author of the “Truth about FixedUpdate” post.
I’ve written a few posts on timing and synching already on the G-Audio support thread; here’s the gist of it:
You do not need to synchronize more accurately than the frame rate (more would be useless), and you cannot synchronize more accurately than the audio thread’s update interval (audio buffer size / output sample rate). There’s only one way to lower latency, and that’s to use smaller audio buffers, which will increase overhead but should be fine on modern machines. Use AudioSettings.SetDSPBufferSize to achieve that. Try powers of 2, not lower than 128. I personally use 512 when I want low latency on iOS.
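For reference, something along these lines (API names as they were at the time; newer Unity versions expose the same values through AudioSettings.GetConfiguration):

using UnityEngine;

public class DspBufferInfo : MonoBehaviour
{
    void Awake()
    {
        // Smaller buffer = lower latency, more CPU overhead. In older Unity
        // versions this call must run before any audio plays to take effect.
        AudioSettings.SetDSPBufferSize(512, 4);

        int bufferLength, numBuffers;
        AudioSettings.GetDSPBufferSize(out bufferLength, out numBuffers);

        // The audio thread wakes up once per buffer, so this is the finest
        // granularity you can react at without scheduling ahead.
        double audioStep = (double)bufferLength / AudioSettings.outputSampleRate;
        Debug.Log("Audio update step: " + (audioStep * 1000.0) + " ms");
    }
}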
Also, don’t forget that synching does not need to be that accurate: what you hear happening 3 m away from you takes roughly 10 ms to reach your ears, and you’re not shocked by the lack of sync!
For musical use cases, it may be useful to synchronize playback of multiple sounds with sample-level accuracy. This can be done with PlayScheduled. My framework, G-Audio, also enables sample-accurate, sub-frame-rate fading (as well as lots more!).
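For instance, two clips can be chained back-to-back on the audio clock by computing the first clip’s exact length from its sample count (the two AudioSources are placeholders):

using UnityEngine;

// The second clip starts exactly where the first ends, on the audio clock
// rather than the frame clock.
public class SampleAccurateChain : MonoBehaviour
{
    public AudioSource sourceA;
    public AudioSource sourceB;

    void Start()
    {
        double startTime = AudioSettings.dspTime + 0.1;   // small lookahead

        // Exact duration of clip A, derived from its sample count.
        double clipALength = (double)sourceA.clip.samples / sourceA.clip.frequency;

        sourceA.PlayScheduled(startTime);
        sourceB.PlayScheduled(startTime + clipALength);
    }
}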
You could also look at using a plugin that gives you access to FMOD and its lower-latency drivers. I was able to get it working with ASIO and ASIO4ALL on Windows and got <5 ms accuracy for music stuff: MIDI input from external controllers triggering drum sounds live. Maybe overkill for your purposes.