If anyone hasn’t heard about it, or doesn’t know what it does, have a look at this video. It gives a very comprehensive summary of the technology.
My question is: could this tech be used to speed up the frame rate of a game in general? It seems like you could get away with a game that renders a single screen at 30fps and squeeze out 45-60fps by modifying the screen pixels of the previous frame rather than doing an entire re-render.
I initially thought no, based on my reading of Carmack's paper, since he actually uses more GPU power in exchange for less latency, but after re-reading what you wrote, it might actually work:
T1: Render scene
T2: Use Timewarp to adjust pixels of T1 to current view transform
T3: Render scene
T4: Use Timewarp to adjust pixels of T3 to current view transform
so you only do a full render every 2nd frame, correct?
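In code I picture it roughly like this, as a minimal sketch of that schedule (every function name here is a made-up placeholder, not any real engine API):

```cpp
#include <cstdint>

struct Mat4 { float m[16]; };  // view transform
struct FrameBuffer;            // pixels of the last full render (opaque here)

// Placeholder engine hooks -- just the shape of the idea, not a real API.
Mat4         ReadCurrentViewTransform();                 // poll the latest camera pose
FrameBuffer* RenderScene(const Mat4& view);              // full, expensive render
void         TimewarpReproject(const FrameBuffer* src,
                               const Mat4& renderedView,
                               const Mat4& currentView); // cheap screen-space warp
void         Present();                                  // flip the back buffer

void RunLoop() {
    FrameBuffer* lastFrame = nullptr;
    Mat4 lastView{};
    for (uint64_t frame = 0; ; ++frame) {
        Mat4 view = ReadCurrentViewTransform();
        if (frame % 2 == 0) {
            // T1 / T3: full render on even frames
            lastFrame = RenderScene(view);
            lastView  = view;
        } else {
            // T2 / T4: warp the previous full render to the newest view,
            // skipping the scene render entirely
            TimewarpReproject(lastFrame, lastView, view);
        }
        Present();
    }
}
```

So the full scene cost is only paid on even frames, and the odd frames cost whatever the warp pass costs.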
My first thought was that if it’s by Carmack then it’s going to be way over my head. But the gentleman in the video proved me wrong and actually managed to explain the concept rather well. I guess it would work for FPSs or any scenario where the camera rotates mainly around its local axis.
Second thought is “that headset still belongs in the museum of celibacy.”
Oh, didn’t get to watch the video all the way through the first time. He actually does discuss this; the main caveat is that the player can’t have moved too much between the two rendered frames, since the warp only re-points the pixels that already exist and can’t reveal anything that was occluded in the source frame.
No, the two things are different.
G-Sync adjusts your refresh rate to match your render time, while this TimeWarp shows how to render a much cheaper approximate frame if you already have a full render available in a buffer somewhere.
Also, in terms of latency, the latency for G-Sync is still the full render time, while the minimum latency for TimeWarp is only the screen-space shader pass, which runs in constant time, independent of scene complexity.
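To illustrate why that cost is constant, here's a rough sketch of a rotation-only warp pass, written as plain C++ standing in for the fragment shader (types like Image and Color are placeholders I made up, not any real API). Every output pixel does the same fixed amount of work, one ray rotation and one texture fetch, no matter what's in the scene:

```cpp
#include <algorithm>
#include <vector>

struct Vec3  { float x, y, z; };
struct Mat3  { float m[3][3]; };  // rotation matrix
struct Color { float r, g, b, a; };

struct Image {
    int w, h;
    std::vector<Color> px;
    Color&       at(int x, int y)       { return px[y * w + x]; }
    const Color& at(int x, int y) const { return px[y * w + x]; }
};

// Rotate a direction vector by a 3x3 matrix.
static Vec3 Mul(const Mat3& a, const Vec3& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

// Rotation-only timewarp: re-point the pixels of 'src' (rendered with the old
// view rotation) at the current view rotation. oldFromNew should be
// R_old^-1 * R_new, i.e. it maps a ray direction from the current camera's
// space into the source frame's camera space. Assumes a square image with a
// symmetric FOV to keep the sketch short; translation isn't handled, which is
// exactly the "player can't have moved too much" caveat, and rays that leave
// the source frustum just clamp to the edge.
void WarpRotationOnly(const Image& src, Image& dst,
                      const Mat3& oldFromNew, float tanHalfFov) {
    for (int y = 0; y < dst.h; ++y) {
        for (int x = 0; x < dst.w; ++x) {
            // Ray through this output pixel in the current camera's space
            // (camera looks down -z).
            float rx = (2.0f * (x + 0.5f) / dst.w - 1.0f) * tanHalfFov;
            float ry = (2.0f * (y + 0.5f) / dst.h - 1.0f) * tanHalfFov;
            Vec3 d = Mul(oldFromNew, Vec3{ rx, ry, -1.0f });

            // Project the ray back into the source frame and fetch the
            // nearest pixel (a real shader would sample bilinearly).
            float u = (d.x / -d.z / tanHalfFov + 1.0f) * 0.5f;
            float v = (d.y / -d.z / tanHalfFov + 1.0f) * 0.5f;
            int sx = std::clamp(int(u * src.w), 0, src.w - 1);
            int sy = std::clamp(int(v * src.h), 0, src.h - 1);
            dst.at(x, y) = src.at(sx, sy);
        }
    }
}
```

The scene geometry is never touched, only the already-shaded pixels, so the pass costs the same whether the frame behind it took 5ms or 50ms to render.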