Motion Vectors & TAA Question

So, two questions. First, I noticed that the Internal-MotionVectors shader in Unity 5.6 creates the motion vectors in screen space; that is, xy is remapped from [-1,1] to [0,1], and then the difference is computed and stored in the motion vectors RT. On the other hand, looking at the HD SRP from Unity on GitHub, I see that no such conversion is done. The velocity seems to be output in NDC, not screen space.

My question is: when using the Post Processing v2 stack and the TAA shader that comes with it, does it expect motion vectors in screen space or in NDC?

My second question: when generating motion vectors while using TAA, should the previous view-projection matrix be the jittered or the unjittered one?

Thanks in advance for any help!

They look the same to me. Both are outputting vectors in screen UV space as far as I can tell.

It depends on the implementation. You could bake the jitter into the motion vectors, but Unity does not. Unity’s motion vectors use the unjittered projection matrices so that, if you’re also using motion blur, the screen isn’t constantly being blurred by a small amount. If the camera and the object are both still, the velocity buffer should ideally be (0,0), not some small fraction of a pixel from the jitter.

So I am looking at ShaderPassVelocity.hlsl in the SRP GitHub repo, and I see nothing that indicates a conversion from NDC to screen space. In fact, there is this segment of code in MaterialUtilities.hlsl:

float2 CalculateVelocity(float4 positionCS, float4 previousPositionCS)
{
    // This test on define is required to remove warning of divide by 0 when initializing empty struct
    // TODO: Add forward opaque MRT case…
#if (SHADERPASS == SHADERPASS_VELOCITY)
    // Encode velocity
    positionCS.xy = positionCS.xy / positionCS.w;
    previousPositionCS.xy = previousPositionCS.xy / previousPositionCS.w;
    float2 velocity = (positionCS.xy - previousPositionCS.xy);
#if UNITY_UV_STARTS_AT_TOP
    velocity.y = -velocity.y;
#endif
    return velocity;
#else
    return float2(0.0, 0.0);
#endif
}

We see that velocity.y isn't 1 - velocity.y, but rather -velocity.y when flipping the y direction. That suggests to me that the range is [-1,1] rather than [0,1]. The fragment shader in ShaderPassVelocity.hlsl does nothing to modify the float2 after calling CalculateVelocity; it simply writes it out to the render target.

Could you elaborate on why you believe the output is in screen-space UVs? I mean, it makes sense why it should be in screen space, but I just don't see it happen at any point, and simply negating the velocity implies values smaller than 0. Unless the contents of that render target are being modified elsewhere after the ShaderVelocity pass, but that would be weird.

As for what you said about the motion vectors being (0,0) if the object and camera are still, that makes intuitive sense. So for the previous VP matrix, I will use the unjittered version!

Thanks for your help :smile:

I was looking at this file:

https://github.com/Unity-Technologies/ScriptableRenderPipeline/blob/master/ScriptableRenderPipeline/HDRenderPipeline/HDRP/RenderPipelineResources/CameraMotionVectors.shader

I have no idea if the HD velocity pass is used yet, so it might be a bug that it's not in screen space, or that may be a change planned for the post processing stack.

Regardless of whether the values are in NDC or screen space, the velocity values can still be positive or negative:

-1.0 - (+1.0) = **-2.0**
0.0 - (+1.0) = **-1.0**

And that's just for objects that were on screen the previous frame. Something moving really fast might have a previous x position of -100.0 in either NDC or screen space.

Ah right, that's true: it's the velocity that's having its y component negated, not the screen-space position of the fragment. I confused myself and somehow figured it was the screen-space coordinate that was having its y component negated.

I didn't know about that CameraMotionVectors shader; the code in it looks more or less identical to the current internal motion vectors shader, with a bit of variable renaming. It's hard for me to tell which it should be, but logic is pushing me toward screen space, because I see nothing in the TAA shader itself that would imply using coordinates that aren't in screen space.

Well, that should clear up my questions, thanks!

I was wondering: why is it that the CameraMotionVectors pass does not require z-testing, but object motion vectors do?
This is the case in both the old motion vectors shader and the current one in the HD SRP repo.