This video on NVIDIA's Reflex, a low-input-latency technology, highlights that to achieve low latency the CPU must be throttled so that only a minimal buffer of frames is queued for the GPU, which limits input latency to the minimum number of frames.
So I was wondering: is there any way within Unity to limit/control how far ahead the CPU runs in the rendering pipeline, so that the GPU buffer only ever holds the next frame?
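For context, the closest thing I've found so far is QualitySettings.maxQueuedFrames, which (if I understand the docs correctly) caps how many frames the CPU is allowed to queue ahead of the GPU on supported platforms/graphics APIs. A minimal sketch of what I mean (the class name is just mine, and I haven't verified this on every platform):

```csharp
using UnityEngine;

// Sketch: cap the CPU-side frame queue so the GPU never has more than one
// frame buffered ahead of it. My understanding is that maxQueuedFrames is
// only honoured on some platforms/graphics APIs, so treat this as something
// to verify, not a guaranteed fix.
public class FrameQueueLimiter : MonoBehaviour
{
    [SerializeField] private int maxQueued = 1; // 1 = only the next frame is queued

    private void Awake()
    {
        QualitySettings.maxQueuedFrames = maxQueued;
        Debug.Log($"maxQueuedFrames is now {QualitySettings.maxQueuedFrames}");
    }
}
```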
Cool, this sounds like it could be ideal. How does it work?
Does it ensure the CPU waits until the next frame is actually needed, so that only the most up-to-date frame information is generated, pausing the Update() loop for just the right amount of time?
Or does it allow the CPU to submit multiple frames, with only the latest ones kept in the buffer and older frames discarded?
If it stalls the Update() function, how would that impact input lag? There is now the potential for longer input delays while the CPU waits for the next Update().
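To frame the question, here's roughly how I'd try to observe where the wait actually happens: comparing the gap between the end of one frame and the next Update() against the time spent inside the frame itself. This is just my own probe sketch (it assumes a recent Unity with Time.realtimeSinceStartupAsDouble); a large pre-Update gap would suggest the engine stalls the CPU before Update, which is where input sampled in Update could pick up extra delay.

```csharp
using System.Collections;
using UnityEngine;

// Rough probe (my own sketch) to see where the frame loop spends its time.
public class FrameStallProbe : MonoBehaviour
{
    private double lastUpdateTime;
    private double lastEndOfFrameTime;

    private IEnumerator Start()
    {
        var wait = new WaitForEndOfFrame();
        while (true)
        {
            yield return wait; // runs after this frame's render commands are issued
            lastEndOfFrameTime = Time.realtimeSinceStartupAsDouble;
        }
    }

    private void Update()
    {
        double now = Time.realtimeSinceStartupAsDouble;
        if (lastEndOfFrameTime > 0)
        {
            // Wait between the previous frame ending and this Update starting.
            double preUpdateGap = now - lastEndOfFrameTime;
            // Work done between last Update and the end of that frame.
            double inFrameWork = lastEndOfFrameTime - lastUpdateTime;
            Debug.Log($"pre-Update gap: {preUpdateGap * 1000:F2} ms, in-frame work: {inFrameWork * 1000:F2} ms");
        }
        lastUpdateTime = now;
    }
}
```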
Could there be a way to push late user input forward into the frame, e.g. adding a muzzle flash to an already-prepared render buffer when the user fires, or adjusting the camera position/rotation at the last moment?
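Something in that spirit seems possible on the CPU side at least: Unity fires Application.onBeforeRender just before rendering, so the freshest camera rotation could be "late-latched" after Update/LateUpdate. This is my own framing, not an official Reflex feature, and a sketch rather than a tested implementation:

```csharp
using UnityEngine;

// Sketch of "late latching" camera rotation in Application.onBeforeRender,
// after Update/LateUpdate but before the frame's render commands are built.
// Caveat: with the legacy input manager, axis values are only sampled once
// per frame, so the gain may be small unless the device is polled more
// directly. Injecting a muzzle flash into an already-submitted frame would
// need GPU-side support that I don't think Unity exposes.
public class LateLatchCamera : MonoBehaviour
{
    [SerializeField] private float sensitivity = 2f;
    private float yaw;
    private float pitch;

    private void OnEnable()  => Application.onBeforeRender += LateLatch;
    private void OnDisable() => Application.onBeforeRender -= LateLatch;

    private void LateLatch()
    {
        // Re-read the axes as late in the frame as possible.
        yaw   += Input.GetAxis("Mouse X") * sensitivity;
        pitch -= Input.GetAxis("Mouse Y") * sensitivity;
        pitch  = Mathf.Clamp(pitch, -89f, 89f);
        transform.rotation = Quaternion.Euler(pitch, yaw, 0f);
    }
}
```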
Question: why do CPUs/GPUs use fixed render buffers when they could use granular, distance-based/dynamism-based buffers that would let the latest input changes be incorporated?
In theory you could have distance-based/dynamism-based render buffers/layers going to the GPU as batches of work, with the most dynamic layers submitted last so that late input changes can still be added, reducing latency (see the conceptual sketch below).
Something like this could also work well for game-streaming systems.
It could also reduce CPU-to-GPU bandwidth load, as the work would be delivered over time rather than in one large buffer.
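To make the idea concrete, here is a purely conceptual sketch of what dynamism-ordered submission might look like. As far as I know no real GPU API behaves this way; everything here (RenderLayer, the dynamism score, the late-input hook) is hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Conceptual sketch of the proposal above -- not a real GPU API.
// Work is split into layers scored by how "dynamic" they are; static
// geometry is submitted first, and the most input-sensitive layer is
// built last so it can latch the freshest input before submission.
public class LayeredSubmissionSketch
{
    public struct RenderLayer
    {
        public string Name;
        public float Dynamism; // 0 = fully static scenery, 1 = input-driven
    }

    public void SubmitFrame(List<RenderLayer> layers, System.Func<float> latestInput)
    {
        // Least dynamic first: these batches could even be streamed early,
        // spreading the CPU-to-GPU transfer over the frame.
        foreach (var layer in layers.OrderBy(l => l.Dynamism))
        {
            if (layer.Dynamism > 0.8f)
            {
                // Hypothetical late-input hook: only the final, most dynamic
                // layers re-read input just before their batch is submitted.
                float input = latestInput();
                System.Console.WriteLine($"Submitting {layer.Name} with late input {input}");
            }
            else
            {
                System.Console.WriteLine($"Submitting {layer.Name}");
            }
        }
    }
}
```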
Has anyone done any input latency testing with this feature to see how much it actually improves latency, and whether it causes problems with frame rates or vsync?
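For what it's worth, this is the kind of CPU-side proxy measurement I had in mind. It can't see driver or display latency, so it's only a lower bound useful for A/B comparing queue settings; true input-to-photon numbers would still need hardware like a high-speed camera:

```csharp
using System.Collections;
using UnityEngine;

// Rough CPU-side latency proxy (my own sketch): timestamp a key press when
// Update sees it, then measure the time until the end of the frame that
// should display the result.
public class InputLatencyProbe : MonoBehaviour
{
    private double pressTime = -1;

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            pressTime = Time.realtimeSinceStartupAsDouble;
    }

    private IEnumerator Start()
    {
        var wait = new WaitForEndOfFrame();
        while (true)
        {
            yield return wait;
            if (pressTime >= 0)
            {
                double ms = (Time.realtimeSinceStartupAsDouble - pressTime) * 1000.0;
                Debug.Log($"Input -> end of frame: {ms:F2} ms");
                pressTime = -1;
            }
        }
    }
}
```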