The idea behind TAA is: if supersampling gets its quality from multiple samples per pixel, why not use the image data from previous frames to get those samples! So each frame the camera is moved very slightly (more specifically, the view frustum is skewed slightly, aka “jittered”) so that what is being rendered is ever so slightly different, and the color values from previous frames are blended into the current one. For a static scene the resulting image can be identical to a supersampled equivalent. This is especially great for surfaces with detailed normal maps and high gloss, as those usually create a lot of aliasing that MSAA can’t fix. And it means you get subpixel anti-aliasing for rendering techniques that aren’t MSAA friendly, like deferred rendering.
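To make the jitter concrete, here’s a minimal sketch of how the sub-pixel offsets are typically generated and applied. A Halton(2,3) sequence is a common choice because its samples spread evenly over the pixel, but everything here (the sequence, the matrix layout) is an illustrative assumption, not any particular engine’s code:

```cpp
// Minimal sketch of TAA jitter using a Halton(2,3) sequence.
// halton() and the matrix layout described below are illustrative
// assumptions, not any specific engine's API.
#include <cstdio>

// Radical inverse in a given base: the classic low-discrepancy sequence.
float halton(int index, int base) {
    float result = 0.0f;
    float f = 1.0f;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

int main() {
    const int width = 1920, height = 1080;
    // Cycle through e.g. 8 jitter positions, each within +/- half a pixel.
    for (int frame = 0; frame < 8; ++frame) {
        float jx = halton(frame + 1, 2) - 0.5f;  // [-0.5, 0.5) in pixels
        float jy = halton(frame + 1, 3) - 0.5f;
        // Convert to NDC units; adding this to the projection matrix's
        // clip-space translation terms (row-major m[0][2], m[1][2] for a
        // typical perspective matrix) is what skews the frustum.
        float ndcX = 2.0f * jx / width;
        float ndcY = 2.0f * jy / height;
        printf("frame %d: jitter (%.4f, %.4f) px -> NDC offset (%.6f, %.6f)\n",
               frame, jx, jy, ndcX, ndcY);
    }
    return 0;
}
```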
But games aren’t static. Things move. So you need to track how things have moved. The first step is to render out a velocity buffer: basically, how far has each pixel’s surface moved on screen since the last frame. Now, instead of using the color data from the same pixel position, you can get the color data from where the object was the previous frame! Problem solved!
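Here’s roughly what that math looks like, written as plain C++ for illustration; the names and the ~10% blend weight are assumptions, and in a real renderer this runs on the GPU against the engine’s own buffers:

```cpp
// Sketch of the per-pixel velocity and history blend. Everything here
// (names, the 0.1 blend weight) is an illustrative assumption.
#include <cstdio>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// The velocity buffer stores, per pixel, how far the surface moved in
// screen space since the last frame: currentUV - previousUV, where
// previousUV comes from reprojecting the pixel's world position through
// last frame's view-projection matrix.
Vec2 velocityAt(Vec2 currentUV, Vec2 previousUV) {
    return { currentUV.x - previousUV.x, currentUV.y - previousUV.y };
}

// The TAA resolve: fetch history from where the surface *was*, then
// blend a small amount of the new frame into the accumulated history.
Color resolve(Color current, Color history, float blend = 0.1f) {
    // History dominates: each frame contributes ~10%, so samples from
    // many jittered frames accumulate into one pixel.
    return lerp(history, current, blend);
}

int main() {
    Vec2 uvNow  = { 0.500f, 0.500f };
    Vec2 uvPrev = { 0.495f, 0.500f };  // the surface moved right
    Vec2 v = velocityAt(uvNow, uvPrev);
    Vec2 historyUV = { uvNow.x - v.x, uvNow.y - v.y };
    printf("sample history at (%.3f, %.3f)\n", historyUV.x, historyUV.y);

    Color current = { 1.0f, 0.0f, 0.0f };
    Color history = { 0.2f, 0.2f, 0.2f };
    Color out = resolve(current, history);
    printf("blended: %.2f %.2f %.2f\n", out.r, out.g, out.b);
    return 0;
}
```

The low blend weight is the whole trick: each frame only contributes a little, so over many jittered frames the history converges toward a supersampled average.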
Except now there are places visible in the current frame that weren’t visible in the previous frame, because that’s where the object used to be. The surfaces behind it weren’t moving, so the velocity buffer shows no movement for them. The pixels in this newly un-occluded area end up blending with the color data of the object that has since moved away. Now you have a ghosting of that object where it used to be.
So, you have to add in some logic to look at the values in the previous frame and reject ones that are too different from the current frame. This might be based on depth, or primitive ID, or color. Either way, ghosting fixed! But now the image is aliased again, because the whole point of this was to get multiple samples per pixel, and aliasing is most obvious exactly where there is a significant change in depth, primitive ID, or color! There’s no perfect way to know if the averaged color from the previous frame is different because of the jittered pixel position, or because something else is moving. So you have to pick some middle ground where you accept some ghosting to get some anti-aliasing. Best case, even if you do figure out which areas were recently un-occluded, they get no anti-aliasing, since they weren’t in the previous frame.
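The most common form that middle ground takes is some flavor of neighborhood clamping: instead of a hard accept/reject, the history color is clamped to the min/max bounds of the current frame’s 3x3 neighborhood, so history that has drifted too far gets pulled back toward something plausible, while history that’s only off by the jitter survives. A minimal sketch, assuming simple per-channel min/max bounds:

```cpp
// Sketch of neighborhood clamping: pull the history color into the
// color range of the current frame's 3x3 neighborhood. Names and
// layout are illustrative assumptions.
#include <algorithm>
#include <cstdio>

struct Color { float r, g, b; };

Color clampToNeighborhood(Color history, const Color n[9]) {
    Color lo = n[0], hi = n[0];
    for (int i = 1; i < 9; ++i) {
        lo.r = std::min(lo.r, n[i].r); hi.r = std::max(hi.r, n[i].r);
        lo.g = std::min(lo.g, n[i].g); hi.g = std::max(hi.g, n[i].g);
        lo.b = std::min(lo.b, n[i].b); hi.b = std::max(hi.b, n[i].b);
    }
    return { std::clamp(history.r, lo.r, hi.r),
             std::clamp(history.g, lo.g, hi.g),
             std::clamp(history.b, lo.b, hi.b) };
}

int main() {
    // 3x3 neighborhood of the current frame around this pixel (grayish).
    Color n[9];
    for (auto& c : n) c = { 0.4f, 0.4f, 0.45f };
    // History still holds a bright red object that has since moved away.
    Color history = { 0.9f, 0.1f, 0.1f };
    Color c = clampToNeighborhood(history, n);
    // Prints values pulled back toward the gray neighborhood.
    printf("clamped history: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
```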
But at least that object in motion is getting the right pixels, because it’s using the velocity data and the offset pixel position in the previous frame belongs to the same object, right? Nope. The motion may be exposing parts of the object that weren’t visible in previous frames, either due to rotation or just perspective, leading to the same problem as above. But even ignoring that, the object likely has some specular reflections. The thing about specular reflections is that they move as the object or the viewer does. So on the surface of a moving object, the specular highlight is most likely not in the same relative spot it was the previous frame. Again, this leads to ghosting.
So at least the objects that aren’t moving over multiple frames are good, right? Nope. If a light in the scene is moving or changing, the specular highlights, shadows, and general shading are changing too. Again, this creates ghosting, or causes the previous frame’s pixels to be rejected, disabling the anti-aliasing.
And this isn’t even getting into transparent objects, which either have to be left out of the TAA entirely, or are guaranteed to cause ghosting in motion. The velocity buffer can only hold one value per pixel, and it’s useless if that value isn’t accurate, so transparent objects can’t render into it: blending the velocities of multiple overlapping layers doesn’t produce an accurate result for any of them. So either your transparent effects smear like crazy, or remain aliased in places where they intersect with opaque geometry.
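To see why blended velocities can’t work, here’s a tiny worked example: two transparent layers moving in opposite directions, their velocities composited the way their colors would be. The numbers are made up purely for illustration:

```cpp
// Worked example of why a blended velocity is useless: two transparent
// layers moving opposite directions, "blended" the way their colors
// would be alpha-composited.
#include <cstdio>

int main() {
    float vNear = +8.0f;   // near layer moving right, pixels per frame
    float vFar  = -8.0f;   // far layer moving left
    float alpha = 0.5f;    // near layer's opacity
    float vBlend = vNear * alpha + vFar * (1.0f - alpha);
    // Result is 0: reprojection would fetch history from a spot that is
    // wrong for *both* layers, so one (or both) of them ghosts.
    printf("blended velocity: %.1f px/frame\n", vBlend);
    return 0;
}
```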