Testing a scene with little going on, I set Application.targetFrameRate to 200 and tested on a 120Hz monitor (reporting 119Hz).
Watching the stand-alone build via the profiler, I notice that instead of the expected 4.99ms, I will sometimes see numbers as high as 6.04ms. It can even be several consecutive frames over 5ms.
What is going on here? If I remove the limit, it easily stays well below 4.99ms constantly.
I’d really appreciate any feedback or ideas, especially technical explanations.
Delta time is probably not a good indicator for whether you missed any frames. You should count the frames over a longer period and then see if the frame count matches what you expect.
Also, what version of Unity are you using, and which platform did you test on? There’s a recent fix, which partially landed in 2020.2, for inconsistencies in delta time. There’s a long blog post that goes into the details and should also give you a better understanding of what’s going on behind the scenes.
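Counting frames over a window like that could be sketched roughly like this (the class name and window length are arbitrary choices, not anything from Unity itself):

```csharp
using UnityEngine;

// Hypothetical sketch: count rendered frames over a fixed window and
// compare the measured rate against Application.targetFrameRate.
// Attach to any GameObject and watch the log in a standalone build.
public class FrameRateCheck : MonoBehaviour
{
    const float WindowSeconds = 10f; // measure over a longer period
    int frames;
    float elapsed;

    void Update()
    {
        frames++;
        elapsed += Time.unscaledDeltaTime; // ignore timescale changes
        if (elapsed >= WindowSeconds)
        {
            float measured = frames / elapsed;
            Debug.Log($"Measured {measured:F2} fps over {elapsed:F1}s " +
                      $"(target {Application.targetFrameRate})");
            frames = 0;
            elapsed = 0f;
        }
    }
}
```

If the measured rate matches the target over a long window, the per-frame delta fluctuations are just pacing jitter rather than dropped frames.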
Specifically, I am looking at the “time ms” reported for the “PlayerLoop” in the profiler, when running a stand-alone build.
I am expecting that, while WaitForTargetFPS fluctuates quite a lot (as I’d expect), the result is a solid line after everything is totalled. I get the sense that something is taking longer than is reported to the VSync/targetFrameRate logic, resulting in the wrong wait times for those frames.
I’ve been testing in a stand-alone build, as I am indeed aware of how much overhead the Editor adds.
I am not specifically using Vulkan. I have it set to automatically choose the API, but Direct3D11 is the default on Windows 10, right?
If there is nothing wrong, then I have nothing to worry about. It concerns me that maybe there is a major issue here and it is not just the profiler showing the wrong timings.
When I get back to working on it again on Monday, I’ll try counting the frames and checking the average time. Is there anything more sophisticated I could use to check for something like the vBlank intervals?
targetFrameRate uses Thread.Sleep (outside of mobile, where it’ll snap to the next lower vSync interval) to wait for the target frame rate, and yeah, Thread.Sleep isn’t all that accurate. If you want it more accurate, you should use vSync and let the GPU wake up the main thread again via the vBlank callback.
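For reference, switching to vSync-driven pacing is just a settings change; note that on desktop platforms targetFrameRate is ignored while vSync is active:

```csharp
using UnityEngine;

// Let the display's vBlank pace the frame instead of Thread.Sleep.
// vSyncCount = 1 waits for every vertical blank; 2 would wait for
// every second one (half the refresh rate).
QualitySettings.vSyncCount = 1;
Application.targetFrameRate = -1; // -1 = platform default, unused under vSync
```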
vSync means I am stuck with whatever the user’s monitor is, so I can’t have 8.33ms frames because vSync could be running at 16.67ms (60Hz) or maybe even 6.94ms (144Hz).
targetFrameRate uses the inaccurate Thread.Sleep.
… How can I set up a more accurate, constant frame time?
I am thinking I’ll have to put all my code in FixedUpdate and never use Update. That just doesn’t sound quite right to me, though.
The reason I am getting into this is that I want a very stable experience for my game which emulates retro consoles, specifically the NES.
I realise the same frame will be drawn twice on monitors that refresh faster than my target 8.33ms, but I figure there is little I can do about that. (I mean, refresh rates that aren’t 29.97 × 2^N multiples, such as 144Hz, were a very dumb idea IMO.)
I guess you could set targetFrameRate to -1 and insert a busy loop into the PlayerLoop at the position that would be taken up by the wait for target frame rate sleep.
Maybe a combination of Thread.Sleep until you are within the error margin of waking up, and then spinning in the busy loop. The busy loop will obviously not be ideal for power consumption and heat generation in your chips, but depending on the hardware you’re running on, that might not be a concern here?
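The sleep-then-spin idea could look something like the sketch below. All names here are illustrative, not a Unity API, and a proper version would insert into the PlayerLoop where the target-frame-rate wait normally sits, as suggested above; this one runs in LateUpdate for brevity, so it paces before rendering rather than at the exact wait position:

```csharp
using System.Diagnostics;
using System.Threading;
using UnityEngine;

// Hypothetical hybrid frame limiter: sleep away most of the wait,
// then busy-spin the final stretch for sub-millisecond accuracy.
public class FrameLimiter : MonoBehaviour
{
    const double TargetMs = 1000.0 / 119.88; // 29.97 * 4 Hz, ~8.342 ms
    const double SpinMarginMs = 1.5;         // spin only the last ~1.5 ms
    readonly Stopwatch watch = Stopwatch.StartNew();

    void Awake()
    {
        QualitySettings.vSyncCount = 0;   // this limiter does the pacing
        Application.targetFrameRate = -1; // disable the built-in sleep
    }

    void LateUpdate()
    {
        // Coarse wait: Thread.Sleep is only accurate to roughly a
        // millisecond or worse, so stop short of the target.
        double sleepMs = TargetMs - watch.Elapsed.TotalMilliseconds - SpinMarginMs;
        if (sleepMs > 0)
            Thread.Sleep((int)sleepMs);

        // Fine wait: burn CPU until the target frame time is reached.
        while (watch.Elapsed.TotalMilliseconds < TargetMs)
        {
        }
        watch.Restart();
    }
}
```

The spin margin trades CPU burn against accuracy: a larger margin tolerates worse Thread.Sleep resolution but spins (and heats the chip) longer each frame.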
It has been a while since I was an 80’s kid, but I remember it being commonplace for NES games to hit low frame rates when lots of enemies appeared on the screen.
I actually plan on using Unity’s Maximum Allowed Timestep to cause the processing of everything to wait a frame if it did not complete in time. So far this works wonderfully well.
I ran that test of frame average again, using vSync for a 120Hz monitor refresh.
The frame times were: 8.39, 8.35, 8.35, 8.34, 8.30, 8.36, 8.35, 8.35, 8.31.
As such, the average appears to be 8.34ms, which is exactly what I would expect to see, since 1000 / (29.97 × 2²) ≈ 8.342ms, where 29.97Hz is the NTSC timing. Yay!