John Carmack tweeted this in relation to Doom and Doom 2 on modern consoles.
Is there a way to run the Unity input loop at higher frequencies?
I know that the rendering loop and the physics loop run at different frequencies, so would polling for input in the physics loop allow for higher-speed inputs?
And would higher input frequencies improve games, esports, or AR/VR experiences, e.g. eye/hand/head tracking rates?
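On the loop-frequency part of the question: the usual pattern is a fixed-timestep loop, where the simulation (and potentially input polling) ticks at a fixed rate while rendering runs at its own rate. A minimal Python sketch of that idea — hypothetical, not Unity's actual internals, and the function names are mine:

```python
# Hypothetical sketch of a fixed-timestep game loop (not Unity's real internals):
# the physics/input step can tick more often than the render loop.

def run(render_hz=60, physics_hz=120, duration_s=1.0):
    """Simulate one second and count rendered frames vs physics ticks."""
    frame_dt = 1.0 / render_hz    # time delivered per rendered frame
    fixed_dt = 1.0 / physics_hz   # fixed physics/input timestep
    accumulator = 0.0
    frames = 0
    physics_ticks = 0
    for _ in range(int(duration_s * render_hz)):
        accumulator += frame_dt
        # Physics (and input polling, if done here) catches up in
        # fixed-size steps, so it can run faster than rendering.
        while accumulator >= fixed_dt:
            physics_ticks += 1
            accumulator -= fixed_dt
        frames += 1
    return frames, physics_ticks
```

With a 120 Hz physics rate against 60 Hz rendering, each frame runs two fixed steps, so anything sampled in the fixed step is polled twice as often as the screen refreshes.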
Would this help solve the *FPS bullets per second problem?
*In FPS games, the bullets fired per frame can go out of sync with the weapon's intended bullets per second; this can give gamers with higher-frame-rate hardware (more bullets) an advantage over gamers with lower-performance hardware (fewer bullets).
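The footnoted problem comes from firing at most one bullet per frame: once the frame time exceeds the weapon's cooldown, fire rate gets silently capped by the frame rate. Carrying leftover frame time in an accumulator, so one long frame can fire several bullets, removes the hardware advantage. A hedged Python sketch — the function names and numbers are illustrative, not from any engine:

```python
def bullets_naive(fps, duration_s, rate):
    """At most one bullet per frame: fire rate is capped at the FPS."""
    dt = 1.0 / fps
    cooldown = 1.0 / rate
    next_fire = 0.0
    shots = 0
    t = 0.0
    while t < duration_s:
        if t >= next_fire:          # can only fire once per frame check
            shots += 1
            next_fire = t + cooldown
        t += dt
    return shots

def bullets_accumulated(fps, duration_s, rate):
    """Carry leftover frame time so a long frame can fire several bullets."""
    dt = 1.0 / fps
    cooldown = 1.0 / rate
    acc = 0.0
    shots = 0
    for _ in range(int(round(duration_s * fps))):
        acc += dt
        while acc >= cooldown:      # fire as many times as elapsed time allows
            shots += 1
            acc -= cooldown
    return shots
```

For example, an 8-bullets-per-second weapon fires only 4 bullets per second on a 4 FPS machine with the naive loop, but the correct 8 with the accumulator; at 16 FPS both approaches give 8.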
Well, all of these questions are very practical for once.
Unity used to only poll input in Update, which is why I couldn't do my original game. That's bad. Someone seems to have found a workaround by using OnGUI() to poll input; I never tried it. They have promised to revamp the input polling system. I don't know where they are at now, but that's the kind of thing that probably means rewriting the engine, which it seems they are doing now.
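To illustrate why polling only once per render frame is bad: a key tap shorter than a frame can fall entirely between two polls and vanish, whereas sampling the device at a higher rate and queuing timestamped edge events preserves it. A hedged Python sketch — the 5 ms tap and the rates are made up for illustration, and this is not the Unity API:

```python
def pressed_at(t_ms):
    """A hypothetical 5 ms button tap, held from t = 20 ms to t = 25 ms."""
    return 20.0 <= t_ms < 25.0

def press_events(poll_hz, duration_ms=100.0):
    """Poll the device at poll_hz and queue timestamped 'button down' edges."""
    events = []
    prev = False
    t = 0.0
    step = 1000.0 / poll_hz
    while t < duration_ms:
        cur = pressed_at(t)
        if cur and not prev:
            events.append(t)   # a frame loop can later consume these edges
        prev = cur
        t += step
    return events
```

Polled at 60 Hz (once per frame), the tap is never seen — the polls land at roughly 16.7 ms and 33.3 ms. Polled at 1000 Hz, the edge is captured at t = 20 ms with a timestamp the game loop can use.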
Higher update frequency definitely helps. In fact, Nvidia made a video about it, citing an experiment previously made by Linus Tech Tips, and sponsoring that same Linus anyway.
Does the improvement come from the video refresh rate or the input refresh rate or both?
I think a test with variable input and refresh rates, ideally with motion blur to compensate for potentially lost visual information, would be the only way to know for sure what improves people's game.
If you look for scientific information on the response time of the rod and cone cells of the eye, you will find 8–20 ms timings, and that's without factoring in the brain's neural recognition time. So in theory anything above 50–120 Hz will just be a blur to your eyes' light receptors.
Yes and no. Motion blur isn't magical in itself; it's basically interpolation between frames, and as such it may lack information. That's the key: even though the brain doesn't perceive distinct images beyond a threshold, it can still infer information from the extra data. So even though in reality we might see a blur, it won't be the same as typical motion blur and will contain more data. Experiments with VR show that we can "feel" a difference up to 1000 Hz. And don't forget about rapid eye movements, and don't forget about angular resolution.
So the real reason is simply that we have more data with finer time resolution. It works for both input (i.e., what we do) and output (what we see).
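The "more data with finer time resolution" point is essentially sampling theory: motion faster than half the sample rate aliases, and averaging the samples into blur cannot recover it. A hedged Python sketch — the 130 Hz oscillation is an arbitrary stand-in for fast motion, not a measured value:

```python
import math

def sample_motion(rate_hz, motion_hz=130.0, duration_s=0.1):
    """Sample a fast 130 Hz oscillation at a given sample rate."""
    n = int(rate_hz * duration_s)
    return [math.sin(2 * math.pi * motion_hz * i / rate_hz) for i in range(n)]

def reversals(samples):
    """Count direction changes visible in the sampled positions."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
```

Sampled at 1000 Hz, nearly every direction reversal of the motion is visible; sampled at 60 Hz, only a slow aliased wobble remains, and no amount of blurring those few samples together brings the lost reversals back.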