TAA+SMAA?

I’ve tried SMAA recently as an alternative to TAA and noticed that it has some quite impressive benefits over TAA. Still, in certain cases SMAA falls short where TAA shines.

I wonder why we can’t use TAA followed by SMAA. I more or less understand how TAA works, while knowing little about SMAA, but from what I’ve read SMAA is just an image-based approach, so I can’t see why the two couldn’t work together. Am I missing something?

P.S. The reason I assume these two can’t work together is that a post-processing (PP) profile only lets you choose one or the other, and there can only be one PP profile on a given camera.

There’s no real reason you couldn’t use TAA followed by SMAA. It’s just generally unnecessary, and TAA tends to produce rather blurry results that SMAA may only make blurrier. The usual expectation is that motion blur will hide the cases where TAA fails.

Curiously, the original version of SMAA as implemented by Crytek had multiple modes, one of which, SMAA T2x, was one of the first implementations of temporal anti-aliasing ever shipped in a game. The more recent version of SMAA, called Filmic SMAA, is basically TAA with SMAA applied to areas where TAA doesn’t have enough information to be effective.

Unity’s TAA in general doesn’t seem to be as good an implementation as other engines have. I focus on VR, so I generally stick to MSAA for everything. I haven’t spent time looking into whether its settings can be tweaked to improve the results.

Did you ever play with Livenda’s CTAA? I wonder if it is actually better than Unity’s TAA?

I’ve looked at their demos. I’ll admit it is impressive from a stability and sharpness standpoint in scenes with no or slow movement. Any significant motion seemed to essentially disable the AA effect more noticeably than Unity’s TAA does, but there are plenty of situations where Unity’s TAA will somehow completely miss an aliased edge that Playdead’s older implementation catches.

Ultimately I hate TAA, but I think it’s an obvious choice for today’s games and gamers, who are used to watching gameplay footage on Twitch or YouTube (the most obvious TAA artifact, the streaking behind moving objects, happens with video compression too). But there are many cases it cannot handle properly: moving objects, transparent effects and surfaces, moving or flashing lights, reflections, or just turning the camera all break TAA.

Hey, how come reflections or moving/flashing lights break TAA? I thought one of the advantages of TAA was that it helps smooth out specular highlights (at least that’s what it did in Deus Ex MD).

The idea behind TAA is: if supersampling is the product of multiple samples per pixel, why not use the image data from previous frames to get those samples? So each frame the camera is moved very slightly (or more specifically, the view frustum is skewed slightly, aka “jittered”) so that what’s rendered is ever so slightly different, and the color values from previous frames are blended into the current one. For a static scene the resulting image can be identical to a supersampled equivalent. This is especially great for surfaces with detailed normal maps and high gloss, as those usually create a lot of aliasing that MSAA can’t fix. And it means you get subpixel anti-aliasing for rendering techniques that aren’t MSAA friendly, like deferred rendering.
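In Unity terms, the jitter part might look roughly like this. This is a minimal sketch, not Unity’s actual TAA code; the JitterCamera component name is made up, and the Halton sequence is just one common choice of sub-pixel offset pattern:

```csharp
using UnityEngine;

// Hypothetical sketch: skew the projection matrix by a sub-pixel amount
// each frame so every frame samples slightly different positions.
[RequireComponent(typeof(Camera))]
public class JitterCamera : MonoBehaviour
{
    Camera cam;
    int frameIndex;

    void OnEnable() { cam = GetComponent<Camera>(); }

    // Halton sequence: a well-distributed low-discrepancy offset pattern.
    static float Halton(int index, int b)
    {
        float result = 0f, f = 1f;
        while (index > 0)
        {
            f /= b;
            result += f * (index % b);
            index /= b;
        }
        return result;
    }

    void OnPreCull()
    {
        frameIndex = (frameIndex + 1) % 8;
        // Offsets in the [-0.5, 0.5] pixel range, converted to NDC units
        // (the NDC range is 2 units wide, so one pixel is 2 / pixelWidth).
        float jitterX = (Halton(frameIndex + 1, 2) - 0.5f) * 2f / cam.pixelWidth;
        float jitterY = (Halton(frameIndex + 1, 3) - 0.5f) * 2f / cam.pixelHeight;

        cam.ResetProjectionMatrix();
        Matrix4x4 proj = cam.projectionMatrix;
        // Skew the frustum: translate clip-space x/y by a sub-pixel amount.
        proj.m02 += jitterX;
        proj.m12 += jitterY;
        cam.projectionMatrix = proj;
    }

    void OnPostRender() { cam.ResetProjectionMatrix(); }
}
```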

But games aren’t static. Things move. So you need to track how things have moved. The first step is to render out a velocity buffer: basically, how much each object has moved on screen since the last frame. Now, instead of using the color data from the same pixel position, you can get the color data from where the object was the previous frame! Problem solved!
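As a toy CPU-side sketch of that reprojection and the history blend (real implementations do this per pixel in a shader; all the names here are made up for illustration):

```csharp
using UnityEngine;

// Hypothetical sketch of TAA reprojection and resolve.
static class TaaReproject
{
    // velocity[x, y] = how far (in pixels) the surface at (x, y) moved
    // since the last frame, so the color it had lives at the old position.
    public static Color SampleHistory(Color[,] history, Vector2[,] velocity, int x, int y)
    {
        Vector2 prev = new Vector2(x, y) - velocity[x, y];
        int px = Mathf.Clamp(Mathf.RoundToInt(prev.x), 0, history.GetLength(0) - 1);
        int py = Mathf.Clamp(Mathf.RoundToInt(prev.y), 0, history.GetLength(1) - 1);
        return history[px, py]; // a real resolve would filter, not point-sample
    }

    // Exponential blend: each new frame contributes only a small fraction,
    // so a pixel accumulates many jittered samples over time.
    public static Color Resolve(Color current, Color history, float feedback = 0.9f)
    {
        return Color.Lerp(current, history, feedback);
    }
}
```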

Except now there are places visible in the current frame that weren’t visible in the previous frame, because the object was covering them, and since those surfaces themselves aren’t moving, the velocity buffer shows no movement for them. The pixels in this newly un-occluded area end up blending with the color data of the object that moved away. Now you have a ghost of that object where it used to be.

So, you have to add in some logic to look at the values in the previous frame and ignore ones that are too different from the current frame. This might be based on depth, or primitive ID, or color. Either way, ghosting fixed! But now the image is aliased again, because the whole point of this was to get multiple samples per pixel, and aliasing is most obvious exactly where there is a significant change in depth, primitive ID, or color! There’s no perfect way to know if the averaged color from the previous frame is different because of the jittered pixel position, or because something else is moving. So you have to pick some middle ground where you accept some ghosting to get some anti-aliasing. And best case, even if you do figure out which areas are recently un-occluded, since they weren’t in the previous frame they get no anti-aliasing.
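One common middle ground is “neighborhood clamping”: pull the history color into the min/max range of the current frame’s 3×3 neighborhood. A toy sketch of the idea (again made-up names, not Unity’s actual code):

```csharp
using UnityEngine;

// Hypothetical sketch of history rejection via neighborhood clamping.
static class TaaClamp
{
    public static Color ClampHistory(Color[,] current, int x, int y, Color history)
    {
        Color lo = Color.white;
        Color hi = Color.black;
        for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            int sx = Mathf.Clamp(x + dx, 0, current.GetLength(0) - 1);
            int sy = Mathf.Clamp(y + dy, 0, current.GetLength(1) - 1);
            Color c = current[sx, sy];
            lo = new Color(Mathf.Min(lo.r, c.r), Mathf.Min(lo.g, c.g), Mathf.Min(lo.b, c.b));
            hi = new Color(Mathf.Max(hi.r, c.r), Mathf.Max(hi.g, c.g), Mathf.Max(hi.b, c.b));
        }
        // History outside this color box is likely a ghost; clamping too
        // aggressively throws away the accumulated anti-aliasing instead.
        return new Color(
            Mathf.Clamp(history.r, lo.r, hi.r),
            Mathf.Clamp(history.g, lo.g, hi.g),
            Mathf.Clamp(history.b, lo.b, hi.b));
    }
}
```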

But at least the object in motion is getting the right pixels, because it’s using the velocity data, and the offset pixel position in the previous frame is of the same object, right? Nope. It may be exposing parts of the object that weren’t visible in previous frames, either due to rotation or just perspective, leading to the same problem as above. But ignoring that, the object likely has some specular reflections. One of the things about specular reflections is that they move as the object or the viewer does. So on the surface of a moving object, the specular highlight is most likely not in the same relative spot it was the previous frame. Again, this leads to ghosting.

So at least the objects that aren’t moving over multiple frames are good, right? Nope. If there’s a light moving or changing in the scene, the specular highlights, shadows, and general shading are changing too, again creating ghosting or causing the previous frame’s pixels to be rejected, disabling anti-aliasing.

And this isn’t even getting into transparent objects, which either have to be left out of the TAA entirely or are guaranteed to cause ghosting in motion. The velocity buffer can only hold one value per pixel, and it’s useless if it’s not accurate, so transparent objects can’t render into it: blended velocity values don’t produce accurate results. So either your transparent effects smear like crazy, or they remain aliased where they intersect with opaque geometry.


Just wanted to thank you for your incredibly insightful and detailed comments. It’s rare for someone on the internet to take the time to write such high-quality answers. Thank you for that!
