Asynchronous Time Warp in Unity

I need to implement Asynchronous Time Warp in Unity.

Has anyone done this, or done any thinking on it? Care to share anything?

As I understand it, you want the Asynchronous Timewarp to be done entirely on a separate thread from Unity’s logic and Unity’s renderer. Is there any way to do this through a low-level Unity plugin of some sort? Or do you basically need to encapsulate Unity in an entirely separate process and intercept its output?
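
The kind of low-level plugin hook I’m imagining would look something like this. It’s only a sketch: “AtwPlugin” and GetRenderEventFunc are placeholders for a native library that doesn’t exist yet, and all the actual capture/warp/present work would have to live on the native side.

```csharp
using System;
using System.Collections;
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch of the low-level plugin route: hand Unity's finished frame to native
// code on the render thread each frame. "AtwPlugin" and GetRenderEventFunc are
// placeholders for a native library you would have to write yourself.
public class NativeAtwHook : MonoBehaviour
{
    [DllImport("AtwPlugin")]
    static extern IntPtr GetRenderEventFunc();   // exported by the hypothetical plugin

    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForEndOfFrame();          // Unity has finished this frame
            GL.IssuePluginEvent(GetRenderEventFunc(), 1);  // run the native callback on the render thread
        }
    }
}
```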

Please define what an “Asynchronous Time Warp” even is.

Also, why does your 3rd paragraph flip who is being spoken to?

@lordofduct it appears to be a “VR” thing

http://xinreality.com/wiki/Timewarp#Asynchronous_Timewarp_.28ATW.29

edit:
A few Unity-specific references here too: https://developer.oculus.com/documentation/mobilesdk/latest/concepts/mobile-timewarp-overview/

I don’t know?

You mean why OP is using “you”?

The Unity API is not thread safe. Yet I don’t think an extra thread is necessary to implement this technique; I can’t see why it would not be possible to do everything from the main thread. The important point is that you would need fine low-level control over when a frame is drawn to the screen and the ability to skip frames. Maybe that can be achieved by hacking around Camera.Render.
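
Something like this rough sketch is what I have in mind (built-in render pipeline style; “eyeCamera” is a placeholder camera you would assign yourself, and presenting with Graphics.Blit at end of frame is an assumption that may need adjusting per Unity version):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of "hacking around Camera.Render": disable the camera so
// Unity stops rendering it automatically, then decide each frame whether to
// render or to skip and re-present the previous image.
public class ManualFrameControl : MonoBehaviour
{
    public Camera eyeCamera;          // placeholder, assigned in the inspector
    RenderTexture sceneRT;

    IEnumerator Start()
    {
        sceneRT = new RenderTexture(Screen.width, Screen.height, 24);
        eyeCamera.targetTexture = sceneRT;
        eyeCamera.enabled = false;                // we call Render() ourselves

        while (true)
        {
            if (Time.frameCount % 2 == 0)         // placeholder skip-frame policy
                eyeCamera.Render();               // draw the 3D scene into sceneRT

            yield return new WaitForEndOfFrame();
            Graphics.Blit(sceneRT, (RenderTexture)null);   // present the latest image
        }
    }
}
```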

There is no way it can run on the main Unity thread.

If Unity is rendering at 60 fps, asynchronous time warp requires a stable 120 fps. The asynchronous time warp thread must run faster and more stably than Unity’s rendering and game loop threads.

What do you mean? If Unity renders at 60 fps, the frame rate is 60 fps. What does 120 fps refer to?

120 fps would be what the asynchronous time warp thread would run at

The idea goes like this.

  1. Unity renders frame buffer at 60 fps
  2. The asynchronous timewarp, a separate process, takes that frame buffer and time-warps it, upping the output to 120 fps and reducing latency relative to input.
  3. The asynchronous timewarp process then outputs the higher-frame-rate result to the actual display.
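
Roughly, the loop I am picturing for step 2 would be something like this. It’s only a sketch: every method in it is a hypothetical stub, because real ATW compositing has to live in the VR runtime/driver rather than in user C# code.

```csharp
using System.Threading;

// Sketch of step 2 above: a dedicated compositor loop that wakes shortly
// before each vsync, grabs the freshest head pose and the last frame the
// engine finished, warps it, and presents it. Every method below is a stub.
class AtwCompositorSketch
{
    volatile bool running = true;

    public void StartCompositor()
    {
        var thread = new Thread(CompositorLoop)
        {
            IsBackground = true,
            Priority = ThreadPriority.Highest   // must not be starved by the game loop
        };
        thread.Start();
    }

    void CompositorLoop()
    {
        while (running)
        {
            WaitUntilJustBeforeVSync();             // hypothetical: wake shortly before scan-out
            object pose  = SampleHeadTracker();     // hypothetical: freshest head orientation
            object frame = LatestSubmittedFrame();  // whatever the engine last finished (60 fps)
            WarpAndPresent(frame, pose);            // re-project and hand to the display (120 Hz)
        }
    }

    // Stubs only -- placeholders for driver/runtime functionality.
    void WaitUntilJustBeforeVSync() { Thread.Sleep(8); }
    object SampleHeadTracker() { return null; }
    object LatestSubmittedFrame() { return null; }
    void WarpAndPresent(object frame, object pose) { }
}
```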

If unity has already rendered it, how could this reduce latency of input?

The input latency is the result of Unity’s update cycle being tethered to whatever its framerate is (usually 60).

How would taking that in post and “warping” it do anything about input latency?

Now, I say this because I’m looking into what ATW is, and it DOES NOT suggest anything about input latency.

It has everything to do with “judder” (the smearing and strobing of the perceived image due to fast eye motion and head movement when so close to the display source).

What ATW does is interrupt before vsync has read the frame from the buffer and insert a “warped” version of the frame to adjust for the head movement. The modification is only a 2-dimensional warping of the image to make up for any rotational or positional motion… because of this it’s not as processor intensive as rendering extra frames.
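
To make the “warp” concrete: per pixel it boils down to rotating the pixel’s view ray by however much the head has turned since the frame was rendered, then re-projecting. A rough sketch of that math, rotation only (no positional correction), assuming a simple pinhole projection, with all names my own:

```csharp
using UnityEngine;

// Rough sketch of the math behind the 2D warp. "renderPose" is the head
// rotation the frame was rendered with, "displayPose" is the rotation read
// just before scan-out.
public static class TimewarpMath
{
    // Returns the UV in the already-rendered frame to sample for a pixel whose
    // view-space ray direction at display time is "displayDir" (z forward).
    public static Vector2 WarpUV(Vector3 displayDir,
                                 Quaternion renderPose, Quaternion displayPose,
                                 float tanHalfFovX, float tanHalfFovY)
    {
        // Rotate the display-time ray back into the frame the image was rendered from.
        Quaternion delta = Quaternion.Inverse(renderPose) * displayPose;
        Vector3 renderDir = delta * displayDir;

        // Project onto the rendered frame's image plane: [-1,1] NDC, then [0,1] UV.
        float x = renderDir.x / (renderDir.z * tanHalfFovX);
        float y = renderDir.y / (renderDir.z * tanHalfFovY);
        return new Vector2(x * 0.5f + 0.5f, y * 0.5f + 0.5f);
    }
}
```

In a real implementation this runs per pixel in a shader (or as a warped mesh on the GPU); a CPU version is only to show the idea.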

… what?

OK… so let’s say vsync is trying to display at 120 fps, but your game is rendering at 50 to 60 fps. What happens is that vsync will just keep sampling the video buffer, and if it comes around 2 or 3 times between the time the engine actually renders a frame and places it in the buffer queue, it just grabs the same frame 2 or 3 times.

With this, a second program, utilizing a feature called ‘GPU Preemption’ (which must be supported by both the GPU hardware and the driver), modifies the frames in the buffer. Just before vsync (which is now at 120 fps, or even higher) comes along to grab the latest frame, the ATW process modifies the frame as it sits in the buffer, adding adjustments based on head motion and other parameters, so that it’s effectively a brand new frame despite the engine not having rendered anything.

So like, head motion turning left or right is supposed to make everything on screen move right or left as you turn. So if we’re looking at a cube, and we turn our head right, that cube will move left and begin to exit our sight.

Well, imagine that scene is just a flat image held in front of your face. When the ATW process knows you’re turning your head right, it takes the image in the buffer and pans it left (in between rendered frames, on the vsync), to add in what the perceived effect OUGHT to look like, basically making a best guess at what the modified frame would have looked like if it had been rendered.

Note, this CAN introduce extra tearing if done improperly. If your ATW takes too much time and only partially modifies the image before vsync snags it…

In the end.

  1. you need to have a GPU that supports GPU Preemption
  2. you need to be running a driver for that GPU that supports GPU Preemption
  3. this is completely independent of the engine in question; you’re just running a second process on the GPU at the same time as the game engine.

What you may need to know about the engine is what sort of frame buffering it implements. For example, if it’s double-buffered, you can share this fact so as to avoid screen tearing. Otherwise, you’re really just going to want to sample information from your game logic in case those parameters impact your warping (say your player is under water, and this impacts the speed at which the perceived image turns as the player spins in place… your ATW process is going to need to know this too).

Of course this is based completely on what little research I have done on the matter. For all I know someone may have come up with a way to perform this without the use of ‘GPU Preemption’ features of graphics cards…

BUT

This has NOTHING to do with input latency.

@techmage
If I understand you correctly, 60 frames would be actual renders of the 3D scene and the other 60 frames would be time warped.

Still, I don’t see why that would not be possible in the main thread. You could do it this way:
Render the 3D scene in one frame, then use the previous frame to make a time-warped frame, then render the 3D scene again… and so on, alternating between “real” frames and time-warped frames. Why would that not be possible from the main thread?

I guess since time warped frames are cheaper to render, you would have more time to do other processing, including processing the input. That would decrease the latency. It’s the same reason why ATW increases the frame rate… which is somewhat misleading yet technically correct.
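
As a rough sketch of what that alternation might look like on the main thread, assuming a disabled camera rendering into a RenderTexture (much like the Camera.Render idea earlier in the thread) and a hypothetical “warpMaterial” whose shader re-projects the old image from a “_DeltaRotation” matrix (these assets are placeholders, not anything that exists):

```csharp
using UnityEngine;
using UnityEngine.XR;

// Rough sketch of the alternation described above: even frames do a full 3D
// render, odd frames re-present the previous image through a re-projection
// shader. "eyeCamera" and "warpMaterial" are placeholders.
public class AlternatingWarpSketch : MonoBehaviour
{
    public Camera eyeCamera;
    public Material warpMaterial;
    RenderTexture sceneRT;
    Quaternion renderPose = Quaternion.identity;   // head pose of the last real render

    void Start()
    {
        sceneRT = new RenderTexture(Screen.width, Screen.height, 24);
        eyeCamera.targetTexture = sceneRT;
        eyeCamera.enabled = false;                 // we decide when it renders
    }

    void LateUpdate()
    {
        Quaternion headPose = InputTracking.GetLocalRotation(XRNode.Head);

        if (Time.frameCount % 2 == 0)
        {
            renderPose = headPose;
            eyeCamera.Render();                              // "real" frame
            Graphics.Blit(sceneRT, (RenderTexture)null);
        }
        else
        {
            // Cheap frame: warp the previous render toward the newest head pose.
            Quaternion delta = Quaternion.Inverse(renderPose) * headPose;
            warpMaterial.SetMatrix("_DeltaRotation", Matrix4x4.Rotate(delta));
            Graphics.Blit(sceneRT, (RenderTexture)null, warpMaterial);
        }
    }
}
```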

Time Warp should be something that the Oculus SDK does without us doing anything. You can tweak some settings, but the whole point of Time Warp is that the Oculus runtime can guess at in-between frames without the engine having to do anything (given hardware, OS, and driver requirements are met).

From https://developer.oculus.com/blog/asynchronous-timewarp-on-oculus-rift/
Emphasis mine.

My question is, are you using an older SDK? Or attempting to implement this for other headsets? @lordofduct seems to know more about it than me. You might be able to bypass the standard Unity camera-view stuff, but at that point I feel like you’re pretty much on the way to writing your own engine.

(This is reeaaally off-topic, but I want to mention that frame rate is directly linked to input latency, because it takes a minimum of two frames for the player to react and then see the results of their input, not counting timewarped frames since they are only repositioned and don’t contain any new data.)

[quote=Garth Smith, post: 2675624, member: 103675]
My question is, are you using an older SDK?
[/quote]According to this post, Oculus SDK v1.3 has been integrated into Unity since 5.3.4p5 and 5.4.0b16.


Not really… the game still has to render its frames, with or without ATW.

Because ATW does cost something, you’d actually be reducing the effective framerate while increasing the simulated (warped) framerate.

AT BEST it would have zero impact on Unity’s rendering framerate (essentially the update cycle); at worst it’d slow it down because it’s stealing GPU processing power from Unity.

So, no effective reduction in input latency.

@lordofduct
It does increase frame rate and decrease latency. See the full explanation: http://xinreality.com/wiki/Timewarp

Eh, that’s oversimplifying it a little bit. It increases frame rate, but the additional frames don’t contain any new data for the player. I can take a single still image and shift it 90 times a second and get 90 fps, but that’s not playable as a game. The exact same thing goes for latency. People may see some kind of updated image quicker; it’s additional data for the eye to make a game look smoother, but there’s no new gameplay data in the image.

From https://developer.oculus.com/blog/asynchronous-timewarp-on-oculus-rift/

Reading more, the whole reason for running on a separate thread is so Time Warp can kick in 2 ms before a VSync. If it were running on the main thread, it wouldn’t be able to guarantee activation right before VSync.

There is a bit of a cost on the GPU: mostly laying out some triangles to match the warped image the Oculus gets, then doing a transform to rotate the image to compensate for head rotation (in other words, not that much work on the GPU). Oculus gives itself 2 ms on a separate thread to do this.

If we’re talking about input latency (different from photon-to-eye time!), then this won’t make a game’s input more responsive. It’s not the number of frames the player sees, it’s how fast we can send them updated information about the game. A timewarped frame has zero new gameplay information. It’s old information transformed to look a little nicer.

The best we should be able to do input-wise is 90 frames a second to match the refresh rate of the display. Any timewarped frames mean LESS GAMEPLAY DATA going to the player, so the player needs to wait longer for a non-timewarped frame to see & react.

If we’re turning off VSync (which has a whole host of other issues in VR) then we aren’t using timewarp at all since we’re just sending frames to the display as fast as possible. (Eh, there might be a way to combine these… but at this point you really need to get the actual game frame rate up…)

That is image latency…

not input latency.

Unity is still only polling inputs and processing them in the ‘Update’ loop of your game at 60 fps (or whatever it renders at).

[quote=“techmage, post:9, topic: 630215, username:techmage”]
120 fps would be what the asynchronous time warp thread would run at

The idea goes like this.

  1. Unity renders frame buffer at 60 fps
  2. The asynchronous timewarp, a separate process, takes that frame buffer and time-warps it, upping the output to 120 fps and reducing latency relative to input.
  3. The asynchronous timewarp process then outputs the higher-frame-rate result to the actual display.
    [/quote]So I’ve done a bit of reading now, and it looks like Oculus runs on a separate thread and activates itself 2 ms before VSync to perform the time warping. You can probably get a similar result by running at double the framerate but you would be throwing half your frames away, and your frames will probably be more out of date than 2 ms.

There is also something about tracking head movement over the past 20 ms or so, then projecting it into the future to figure out how much the timewarped image needs to be adjusted.
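
Presumably the prediction part is just estimating angular velocity from recent head samples and extrapolating a few milliseconds ahead. Something like this sketch (illustrative only, these are not Oculus SDK calls, and a real implementation would filter over a longer window than two samples):

```csharp
using UnityEngine;

// Sketch of head-pose prediction: estimate angular velocity from two recent
// head samples and extrapolate a short time into the future.
public static class PosePredictionSketch
{
    // "previous" and "current" are head rotations sampled "sampleDt" seconds apart.
    public static Quaternion Predict(Quaternion previous, Quaternion current,
                                     float sampleDt, float predictAheadSeconds)
    {
        // Rotation that occurred over the last sample interval.
        Quaternion delta = current * Quaternion.Inverse(previous);
        delta.ToAngleAxis(out float angleDeg, out Vector3 axis);
        if (angleDeg > 180f) angleDeg -= 360f;   // take the short way around

        // Assume constant angular velocity and extrapolate forward in time.
        float predictedAngle = angleDeg * (predictAheadSeconds / sampleDt);
        return Quaternion.AngleAxis(predictedAngle, axis) * current;
    }
}
```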

It looks like Oculus worked with nVidia, AMD, Microsoft, etc. to actually update Windows and graphics drivers to achieve preempting vsync. I don’t know enough about the graphics pipeline to figure out how to preempt vsync by a couple of milliseconds. If you find out, please share your solution! =p

Are we talking about the same thing? I don’t know what you mean. Did you read this?
The latency is the elapsed time between when the head position data is acquired and when the frame is rendered. That latency is decreased.

Different things. There are many types of latency. @techmage had a comment that mentioned input. Input latency is the time from the player pressing a button to seeing the results on screen.
