If Unity has already rendered the frame, how could this reduce input latency?
Input latency is the result of Unity’s update cycle being tethered to whatever its framerate is (usually 60 fps).
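For a rough sense of scale (my own back-of-envelope numbers, not anything from the ATW material): if input is sampled once per Update() and only reaches the screen at a later vsync, the arithmetic looks something like this.

```python
# Back-of-envelope motion-to-photon latency when input is polled once per frame.
# All numbers are illustrative assumptions, not measurements.

REFRESH_HZ = 60                      # assumed display/engine rate
FRAME_MS = 1000.0 / REFRESH_HZ

# Input sampled at the start of frame N is rendered during frame N and
# scanned out at the vsync that ends frame N (best case), or a frame later
# if the swap chain holds an extra buffered frame.
best_case_ms = FRAME_MS              # ~16.7 ms
buffered_case_ms = 2 * FRAME_MS      # ~33.3 ms with one extra buffered frame

print(f"frame time:      {FRAME_MS:.1f} ms")
print(f"best case:       {best_case_ms:.1f} ms input-to-photon")
print(f"double buffered: {buffered_case_ms:.1f} ms input-to-photon")
```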
How would taking that frame in post and “warping” it do anything about input latency?
Now, I say this because I’ve been looking into what ATW (Asynchronous Timewarp) actually is, and it DOES NOT suggest anything about input latency.
It has everything to do with “judder” (the smearing and strobing of the perceived image caused by fast eye and head movement when your eyes are that close to the display).
What ATW does is interrupt just before vsync reads the frame out of the buffer and insert a “warped” version of that frame to adjust for head movement. The modification is only a 2-dimensional warping of the image to compensate for rotational or positional motion… because of this it’s far less processor-intensive than rendering extra frames.
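A minimal sketch of what that rotational warp could look like, assuming a simple pinhole camera model and using numpy (the function names, the 90-degree FOV, and the nearest-neighbour resampling are all my own choices, not anything from an actual ATW implementation):

```python
import numpy as np

def rotation_yaw_pitch(yaw, pitch):
    """Small head rotation since the frame was rendered (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return r_yaw @ r_pitch

def warp_frame(frame, yaw, pitch, fov_x_deg=90.0):
    """Resample an already-rendered frame for a small rotation-only pose change.

    The frame is treated as an image at infinity, so the correction is a
    single homography H = K * R * K^-1 -- pure 2D warping, no re-rendering.
    """
    h, w = frame.shape[:2]
    f = (w / 2.0) / np.tan(np.radians(fov_x_deg) / 2.0)   # focal length in pixels
    K = np.array([[f, 0.0, w / 2.0],
                  [0.0, f, h / 2.0],
                  [0.0, 0.0, 1.0]])
    H = K @ rotation_yaw_pitch(yaw, pitch) @ np.linalg.inv(K)

    # For each output pixel, look up where it came from in the source frame.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = np.linalg.inv(H) @ pts
    sx, sy = (src[:2] / src[2]).round().astype(int)

    out = np.zeros_like(frame)
    sx, sy = sx.reshape(h, w), sy.reshape(h, w)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[valid] = frame[sy[valid], sx[valid]]
    return out

# A couple of degrees of yaw shifts the whole image sideways by a few dozen pixels.
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
warped = warp_frame(frame, yaw=np.radians(2.0), pitch=0.0)
```

Real implementations do this on the GPU (usually folded into the same pass as lens-distortion correction); the point is just that it’s pure image resampling, with no scene geometry involved.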
… what?
OK… so let’s say vsync is trying to display at 120 fps, but your game is only rendering at 50 to 60 fps. What happens is that the display just keeps sampling the video buffer, and if it comes around 2 or 3 times between the moments the engine actually finishes a frame and puts it in the buffer, it just grabs the same frame 2 or 3 times.
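Here’s a toy simulation of that mismatch, just to show the repeat pattern (the 120 Hz / 50 fps numbers are assumptions taken from the example above):

```python
# Toy model: the display scans out at a fixed rate, the engine finishes frames
# more slowly, so vsync ends up grabbing the same frame 2 or 3 times.
SCANOUT_HZ = 120
RENDER_FPS = 50

scanout_dt = 1.0 / SCANOUT_HZ
render_dt = 1.0 / RENDER_FPS

latest_frame = 0                  # frame sitting in the buffer right now
next_render_done = render_dt      # when the engine finishes the next one
shown = []

for i in range(12):               # 12 scan-outs = 100 ms at 120 Hz
    t = i * scanout_dt
    while next_render_done <= t:  # engine swaps in any frame it has finished
        latest_frame += 1
        next_render_done += render_dt
    shown.append(latest_frame)    # vsync grabs whatever is in the buffer

print(shown)   # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4]
```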
With ATW, a second process, using a feature called ‘GPU Preemption’ (which must be supported by both the GPU hardware and its driver), modifies the frames in the buffer. Just before vsync (which is now at 120 fps, or even higher) comes along to grab the latest frame, the ATW process modifies the frame as it sits in the buffer, applying adjustments based on the latest head-tracking data and other parameters, so that it’s effectively a brand new frame despite the engine not rendering anything.
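Timing-wise, the second process looks roughly like this. Everything here is pseudocode with made-up names (`swapchain`, `tracker`, `warp`); a real implementation is a high-priority GPU context that preempts the renderer’s work, not a Python loop:

```python
import time

VSYNC_HZ = 120
VSYNC_PERIOD = 1.0 / VSYNC_HZ
WARP_BUDGET = 0.002   # assume the warp itself needs ~2 ms

def sleep_until(t):
    now = time.monotonic()
    if t > now:
        time.sleep(t - now)

def atw_loop(swapchain, tracker, warp):
    """Runs alongside the engine. Just before every vsync it takes the most
    recently *rendered* frame plus the freshest head pose and rewarps the
    frame, whether or not the engine finished anything new since last time."""
    while True:
        next_vsync = swapchain.next_vsync_time()          # stand-in call
        sleep_until(next_vsync - WARP_BUDGET)             # wake as late as possible

        frame, render_pose = swapchain.latest_rendered()  # stand-in call
        display_pose = tracker.predict_pose(next_vsync)   # freshest head data

        # 2D rotational correction between the pose the frame was rendered
        # with and the pose the head will actually be in at scan-out.
        swapchain.present(warp(frame, render_pose, display_pose))
```

The preemption part matters because the warp has to squeeze in between whatever work the engine already has queued on the GPU, and it has to finish before the vsync deadline; otherwise you get exactly the partial-update tearing described a couple of paragraphs down.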
So, like, turning your head left or right is supposed to make everything on screen move right or left as you turn. If we’re looking at a cube and we turn our head right, that cube should move left and begin to exit our view.
Well, imagine that scene is just a flat image held in front of your face. When the ATW process knows you’re turning your head right, it takes the image in the buffer and pans it left (in between rendered frames, on the vsync), adding in what the perceived effect OUGHT to look like, basically making a best guess at what the frame would have looked like if the engine had actually rendered it.
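To put a crude number on that pan (ignoring lens distortion and treating the view as a flat pinhole projection; the 90-degree FOV and 1280 px width are just assumed values):

```python
import math

def pan_pixels(yaw_delta_deg, image_width_px=1280, fov_deg=90.0):
    """Roughly how far the flat image should slide sideways for a given
    head yaw, under a simple pinhole projection."""
    f = (image_width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    return f * math.tan(math.radians(yaw_delta_deg))

# Turning the head ~2 degrees to the right -> slide the image ~22 px the other way.
print(round(pan_pixels(2.0)))   # 22
```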
Note, this CAN introduce extra tearing if done improperly. If your ATW takes too much time and only partially modifies the image before vsync snags it…
…
In the end:
- you need to have a GPU that supports GPU Preemption
- you need to be running a driver for that GPU that supports GPU Preemption
- this is completely independent of the engine in question; you’re just running a second process on the GPU at the same time as the game engine.
What you may need to know about the engine is what sort of frame buffering it implements. For example, if it’s double buffered, you’ll want to share that fact so as to avoid screen tearing. Beyond that, you’re really just going to want to sample state from your game logic in case those parameters impact your warping (say your player is underwater, and this changes the speed at which the perceived image turns as the player spins in place… your ATW process is going to need to know this too).
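Something like this is what I take that to mean in practice: a made-up parameter block shared between the game logic and the warp process (every name here is invented for illustration, not from any real runtime):

```python
from dataclasses import dataclass

@dataclass
class WarpParams:
    """State the ATW process samples from the game logic each frame.
    All field names are invented for illustration."""
    double_buffered: bool = True    # affects when it's safe to touch the buffer
    underwater: bool = False        # example of game state that alters perception
    turn_rate_scale: float = 1.0    # how fast the view should appear to turn

def effective_yaw_delta(measured_yaw_delta_deg, params):
    """Scale the measured head yaw by whatever the game says perception
    should feel like right now, so the warp matches the engine's intent."""
    scale = params.turn_rate_scale * (0.7 if params.underwater else 1.0)
    return measured_yaw_delta_deg * scale

print(effective_yaw_delta(2.0, WarpParams(underwater=True)))   # 1.4
```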
Of course, this is based completely on what little research I have done on the matter. For all I know someone may have come up with a way to perform this without the use of ‘GPU Preemption’ features of graphics cards…
BUT
This has NOTHING to do with input latency.