Hi!
I’m currently going over all the pipeline steps for getting the AOVs I would normally use in an offline setting out of Recorder and into Nuke for compositing. One area I haven’t quite been able to figure out is what kind of data Recorder writes for its motion vector AOV, so I know how to treat it in Nuke for things like motion blur.
I am getting motion vector data in the written AOV, so it seems to work as intended when it is being rendered. The values are very low (both U and V below 0.0005), so I assume it’s expecting to have some math done to it in comp? One example would be motion vectors from Houdini, which are in NDC (-1 to 1) space and need to be multiplied by the resolution (r * width, g * height) to get vector data that Nuke can use.
It also seems to have negative values, so I’m guessing it’s centered around 0 and does not need a UV offset when using things like Nuke’s VectorBlur?
I don’t know if anyone here has much Nuke-specific experience, but any info on the hows and whats of the motion vector data would be super helpful in figuring it out on my end, or at least in knowing what to ask my comper friends when bugging them for answers.
Thanks!
Hey @larsStranden! Apologies for the delay in answering.
Your assumption is correct: the MotionVectors AOV coming from Unity is in NDC and will need to be multiplied by the width/height in Nuke before the VectorBlur.
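In case it helps, here’s a rough Nuke Python sketch of that rescale step. The channel names (forward.u / forward.v), the file path, and the exact knob usage are my assumptions about a typical EXR/VectorBlur setup, not guaranteed Recorder output, so treat it as a starting point:

```python
# Rough sketch: scale NDC motion vectors to pixel space, then feed VectorBlur.
# Run inside Nuke's script editor. Channel/knob names are assumptions.
import nuke

read = nuke.nodes.Read(file="render_with_vectors.####.exr")  # hypothetical path

# Expression node: multiply U by width and V by height (NDC -> pixels).
scale = nuke.nodes.Expression(inputs=[read])
scale["expr0"].setValue("forward.u * width")   # U component in pixels
scale["expr1"].setValue("forward.v * height")  # V component in pixels
scale["channel0"].setValue("forward")          # assumes vectors live in a 'forward' layer

# VectorBlur then consumes the pixel-space vectors.
# Newer Nuke versions use VectorBlur2; knob names can vary by version.
blur = nuke.nodes.VectorBlur(inputs=[scale])
blur["uv"].setValue("forward")
```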
One particularity to note is that the MotionVectors always assume a fully open shutter (360°). When comparing our result with an offline renderer (Arnold), I noticed that its values are affected by the shutter interval defined in the render settings.
If you want to simulate the exact result from Arnold using a 0.5 shutter (180°), you can divide the values of the Unity MotionVector by 2.
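In plain math terms, the conversion is just a linear scale by shutter_angle / 360. A tiny Python sketch (the names are mine, purely for illustration):

```python
# Sketch: rescale Unity's fully-open-shutter (360 deg) vectors to match a
# renderer using a narrower shutter. Pure convention math, no Nuke required.
def shutter_scale(uv, shutter_angle_deg):
    """Scale a (u, v) motion vector from a 360 deg shutter to the given angle."""
    factor = shutter_angle_deg / 360.0
    return (uv[0] * factor, uv[1] * factor)

# A 180 deg shutter halves the vectors -- the "divide by 2" mentioned above.
print(shutter_scale((0.0004, -0.0002), 180.0))  # -> (0.0002, -0.0001)
```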
Another small thing: theoretically the range should be [-1, 1], but we currently clamp values to [-0.5, 0.5]. That sounds like data loss, but in reality, for an object to produce motion vector values greater than 0.5 it would need to move more than 50% of the image resolution within a single frame (at 1920 px wide, that’s over 960 px of travel in one frame). That implies incredible speed and is very unlikely.
Cheers!
@cguertin thank you so much for your detailed explanation, much appreciated! And sorry for the world’s slowest reply.
Coming from a mainly Arnold background, I can confirm your observation: Arnold defaults to a 180° shutter but has the option to change the shutter angle, as well as to override where in time the sampling takes place, i.e. where in the -1 to 1 range relative to the frame the half-open shutter window sits (though I have to say, in all my time as a lighter I don’t think I ever changed the latter).
Good to know about the value clamp, though I definitely agree it probably won’t ever be an issue. Adding motion blur with motion vectors in comp can be tricky and unreliable even at normal movement speeds / blur amounts, so my guess is those extreme values wouldn’t give you especially usable results anyway.
Honestly, at almost any show or shop I’ve worked, we’ve basically always rendered shots with motion blur baked in from the renderer, since it usually gives a much better and more accurate result, especially when there are multiple overlapping objects being blurred. Then we’d mostly use the motion vector AOV for adding matching motion blur to whatever 2D elements the compositor needed to add to the rendered frame.
Thanks again, very much appreciated!