The manual says dspTime is “… much more precise than the time obtained via the Time.time property,” but if I read dspTime every frame, I can see that it keeps the exact same value for up to 3-4 frames in a row. Even if I move the read into FixedUpdate (50 fps), it still doesn’t change every frame. How can you call that precise?
Is it this bad for everyone else?
I noticed some small sync issues in my game and finally traced them to this. I was surprised when I couldn’t find anyone else talking about it.
You have to read this value in the context of the audio system, e.g. in OnAudioFilterRead(), which runs at the system’s audio sample rate rather than per frame. As to why Update/FixedUpdate give you the same result multiple times, I can only assume there are some race conditions affecting dspTime in Update/FixedUpdate.
The value is the DSP time at the start of the current audio buffer. Depending on the combination of the buffer length (which seems to default to 1024 samples), the fixed timestep, and the framerate, it is completely normal for one value to persist across several Update/FixedUpdate calls.
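You can see the step size for yourself with a quick sketch (AudioSettings.GetDSPBufferSize is the actual Unity API; the rest is just illustration):

    int bufferLength, numBuffers;
    AudioSettings.GetDSPBufferSize(out bufferLength, out numBuffers);
    double stepSeconds = (double)bufferLength / AudioSettings.outputSampleRate;
    // At 1024 samples and 48 kHz this is ~21 ms, longer than a 60 fps frame,
    // which is exactly why dspTime can repeat for several Update() calls.
    Debug.Log("dspTime advances in steps of " + (stepSeconds * 1000.0) + " ms");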
Retrieving a current DSP time on a non-DSP thread and then using it to schedule something on the DSP thread is not a reliable thing to do. Even if you got a more “current” value, you still couldn’t do that reliably. You need to set up an absolute reference point (e.g. the DSP time you scheduled a particular sound at) that you can calculate reliable DSP times from (by adding, say, a number of beats or the length of a sound).
This is not a problem with Unity though. Synchronizing stuff on an audio thread is just tricky by nature and takes a bit of learning.
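To make the reference-point idea concrete, here is a minimal sketch (musicSource, nextSource, and the 120 bpm figure are just placeholders):

    double beatLength = 60.0 / 120.0;              // assuming 120 bpm
    double anchor = AudioSettings.dspTime + 0.5;   // absolute reference point, with scheduling headroom
    musicSource.PlayScheduled(anchor);             // musicSource is an AudioSource
    // Anything derived from the anchor by pure addition stays sample-accurate,
    // no matter how stale the dspTime you read on the main thread was:
    nextSource.PlayScheduled(anchor + 16.0 * beatLength);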
I use PlayScheduled() and generate all the audio start times myself, so the music is all in perfect time.
My problem is that I use those same DSP times to sync projectiles, and I noticed they aren’t always evenly spaced. After a while I realized it’s because dspTime returns the same value for several frames at a time.
I should probably write a custom time method that is synced with dspTime but uses Time.time so it updates every frame? Maybe?
I’m not sure what the best solution is but thanks for the info, I appreciate it.
That wouldn’t accomplish what you want; you’re still mistaken about the nature of the real issue. Your problem would have been present even if you’d gotten more “recent”, non-repeating, evenly spaced DSP time values in Update/FixedUpdate. Retrieving a DSP time on a non-DSP thread and using that value, without a reliable reference point, to schedule a sound on the DSP thread can by nature never be sample-accurate in any software. You’ll have to read up on why that is (audio buffers and passing values between parallel threads), and either learn to set up reliable reference points that you can use in your scheduling calculations, or write a custom audio filter, since that code will actually run on the DSP thread.
You might misunderstand me. My music pieces all fit together very nicely and seem to be sample-perfect.
For that I read dspTime once, before the song starts, and build a whole array of all the future measure start times using addition. Then when I PlayScheduled() a piece of the music, I use a time from that array.
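Roughly like this (bpm, beatsPerMeasure, and measureCount stand in for my real song data):

    double secondsPerMeasure = 60.0 / bpm * beatsPerMeasure;
    double songStart = AudioSettings.dspTime + 0.5;   // a little headroom before measure 0
    double[] measureStarts = new double[measureCount];
    for (int m = 0; m < measureCount; m++)
        measureStarts[m] = songStart + m * secondsPerMeasure;

    // Later, when a piece of music should start on measure k:
    pieceSource.PlayScheduled(measureStarts[k]);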
My problem is that I compare that same array of DSP times against the current dspTime to trigger projectiles on the beat. I do this inside Update(), and since dspTime sometimes doesn’t update for 5 frames in a row, some of my projectiles fire 5 frames later than they should.
Oh, okay. Sorry, I completely misunderstood the usage then! That makes this a lot easier! Declare a double that you use as a timer. In Update, do one of two things. 1: If AudioSettings.dspTime has a new value compared to the last frame, set your timer to that value. 2: If AudioSettings.dspTime is a duplicate of last frame’s value, add Time.unscaledDeltaTime to your timer instead. Comparing against this timer should definitely improve the synchronization.
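Something like this, as a minimal sketch (SmoothedDspTime and Current are just names I made up for the example):

    using UnityEngine;

    public class SmoothedDspTime : MonoBehaviour
    {
        // A dspTime-anchored clock that still advances every frame.
        public static double Current { get; private set; }
        private double lastRawDspTime;

        void Awake()
        {
            Current = AudioSettings.dspTime;
            lastRawDspTime = Current;
        }

        void Update()
        {
            double raw = AudioSettings.dspTime;
            if (raw != lastRawDspTime)
            {
                // dspTime advanced since last frame: resynchronize to it.
                Current = raw;
                lastRawDspTime = raw;
            }
            else
            {
                // Duplicate reading: extrapolate with the unscaled frame delta.
                Current += Time.unscaledDeltaTime;
            }
        }
    }

Compare your scheduled trigger times against SmoothedDspTime.Current instead of AudioSettings.dspTime, and the triggers will no longer bunch up on buffer boundaries.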
I was having the same problem understanding how this worked, and @Nifflas_1’s advice really helped. If anyone else stumbles on this thread, here is a metronome based on the manual’s example, with an event system that runs on the main thread. I believe the timing is sample-accurate; if anyone sees a flaw in my logic, I would be happy to improve this code.
// The code example shows how to implement a metronome that procedurally
// generates the click sounds via the OnAudioFilterRead callback.
// While the game is paused or suspended, DSP time will not be updated and
// playing sounds will be paused, so music scheduling routines do not have
// to do any rescheduling after the app is unpaused.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class Metronome : MonoBehaviour
{
    // Timer events based on the beat.
    public delegate void Beat();
    public static event Beat OnBeat;
    public delegate void DownBeat();
    public static event DownBeat OnDownBeat;

    // Timestamps used to hand beat detection from the DSP thread to Update().
    private double downBeatTime = 0;
    private double lastDownBeatTime = 0;
    private double beatTime = 0;
    private double lastBeatTime = 0;

    public double bpm = 140.0;
    public float gain = 0.5F;
    public int signatureHi = 4;
    public int signatureLo = 4;
    public bool playMetronomeTick = true;

    private double nextTick = 0.0;   // next tick, measured in samples
    private float amp = 0.0F;
    private float phase = 0.0F;
    private double sampleRate = 0.0;
    private int accent;
    private bool running = false;

    void Start()
    {
        accent = signatureHi;
        double startTick = AudioSettings.dspTime;
        sampleRate = AudioSettings.outputSampleRate;
        nextTick = startTick * sampleRate;
        running = true;
    }

    private void Update()
    {
        // If the DSP thread stamped a tick during the buffer whose dspTime we
        // recorded last frame, fire the matching event on the main thread.
        if (lastBeatTime == beatTime)
        {
            if (lastDownBeatTime == downBeatTime)
            {
                if (OnDownBeat != null)
                    OnDownBeat();
            }
            else
            {
                if (OnBeat != null)
                    OnBeat();
            }
        }
        downBeatTime = AudioSettings.dspTime;
        beatTime = AudioSettings.dspTime;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        if (!running)
            return;

        double samplesPerTick = sampleRate * 60.0F / bpm * 4.0F / signatureLo;
        double sample = AudioSettings.dspTime * sampleRate;
        int dataLen = data.Length / channels;
        int n = 0;
        while (n < dataLen)
        {
            // Add the decaying sine click on top of whatever is already in the buffer.
            float x = gain * amp * Mathf.Sin(phase);
            int i = 0;
            while (i < channels)
            {
                data[n * channels + i] += x;
                i++;
            }
            while (sample + n >= nextTick)
            {
                nextTick += samplesPerTick;
                if (playMetronomeTick)
                    amp = 1.0F;
                if (++accent > signatureHi)
                {
                    accent = 1;
                    if (playMetronomeTick)
                        amp *= 2.0F; // louder click on the downbeat
                    lastDownBeatTime = AudioSettings.dspTime;
                }
                lastBeatTime = AudioSettings.dspTime;
                // Debug.Log("Tick: " + accent + "/" + signatureHi);
            }
            if (playMetronomeTick)
            {
                phase += amp * 0.3F;
                amp *= 0.993F; // exponential decay of the click
            }
            n++;
        }
    }
}
Put that script on a manager object in your scene, and in any other script, subscribe to the events:
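For example, a subscriber could look like this (BeatListener and the handler bodies are just placeholders):

    using UnityEngine;

    public class BeatListener : MonoBehaviour
    {
        void OnEnable()
        {
            Metronome.OnBeat += HandleBeat;
            Metronome.OnDownBeat += HandleDownBeat;
        }

        void OnDisable()
        {
            Metronome.OnBeat -= HandleBeat;
            Metronome.OnDownBeat -= HandleDownBeat;
        }

        void HandleBeat()
        {
            // React to a regular beat, e.g. fire a projectile.
        }

        void HandleDownBeat()
        {
            // React to the accented first beat of a measure.
        }
    }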
I know it’s more than a year on from when you posted this, but I read through it tonight and noticed one issue: at low Update() framerates it is possible to miss the window where lastBeatTime and lastDownBeatTime match, so the handlers never get called.
This could be fixed by detecting the beat and downbeat states on the DSP thread and setting a boolean for each, if it isn’t already set. In Update(), fire the events if the booleans are set, then unset them (see the sketch below). That ensures each event is called at least once when Update() gets to it.
Alternatively, you could (lock and) queue them from the DSP thread and then fire them from Update(), if you really need to track every single beat.
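A minimal sketch of the boolean approach (field names are made up; the commented lines show where the flags would be set inside the existing OnAudioFilterRead tick loop):

    // Fields added to the Metronome class:
    private volatile bool beatPending;
    private volatile bool downBeatPending;

    // In OnAudioFilterRead, inside the tick loop (where lastBeatTime is set):
    //     beatPending = true;
    //     if (accent == 1) downBeatPending = true;

    private void Update()
    {
        if (downBeatPending)
        {
            downBeatPending = false;
            beatPending = false; // the downbeat consumes this tick’s beat flag too
            if (OnDownBeat != null)
                OnDownBeat();
        }
        else if (beatPending)
        {
            beatPending = false;
            if (OnBeat != null)
                OnBeat();
        }
    }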
I am by no means an audio person, and I have since discovered what you observed: my code is not sample-accurate and does miss beats here and there.
As I understand it, OnAudioFilterRead runs on the DSP thread, so I guess you are suggesting the booleans would live there? Still, it seems like any kind of game trigger will have to happen in Update(), so it’s still not going to appear accurate in the game. I think the only way to get true accuracy is to run everything on the DSP thread, meaning that whatever we want to show as a reaction in the game (rather than as audio) will never be synchronized perfectly.
You can flog the game’s main thread so hard that gameplay stutters to single-figure frame rates, yet all logic on the OnAudioFilterRead thread (thread ID 22 on my machine) carries on without a hiccup and keeps playing perfectly smooth audio, as long as it isn’t reliant on anything from the main thread.
The only way to stutter the audio thread is to overload it with maths or other logic, or to starve it of the data it needs to produce sound smoothly.
And DSP time seems to be incredibly reliable: the callbacks are almost always exactly equidistant, down to tiny fractions of a second.
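If you want to verify that for yourself, here is a tiny probe (DspProbe is just an ad-hoc name) that logs the spacing between consecutive audio callbacks:

    using UnityEngine;

    [RequireComponent(typeof(AudioSource))]
    public class DspProbe : MonoBehaviour
    {
        private double lastDsp;

        void OnAudioFilterRead(float[] data, int channels)
        {
            double now = AudioSettings.dspTime;
            if (lastDsp > 0)
            {
                // With a 1024-sample buffer at 48 kHz this should read ~21.333 ms
                // every single time, regardless of main-thread framerate.
                Debug.Log("buffer delta: " + ((now - lastDsp) * 1000.0) + " ms");
            }
            lastDsp = now;
        }
    }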