I needed something better than the infamous (and so bad I’d rather disable AA than use it) FXAA, so I ported SMAA to Unity. It’s a clean port based on the original, up-to-date reference implementation (v2.7). It’s fast and looks great, even compared to MSAA 4x.
Tested with Unity 5+ (Personal or Pro). Works with the deferred & forward rendering paths, in gamma or linear color space, with DirectX 9, DirectX 11 and OpenGL targets.
It comes with a few quality presets but you can easily build your own in the inspector. Every inspector setting comes with a help popup so you shouldn’t have to dig into the (highly documented) source code.
Right now it implements SMAA 1x (+ predication). Implementing Temporal SMAA (T2x) should be doable, but the spatial (S2x) and spatial + temporal (4x) variants aren’t possible in Unity right now.
This asset should come as a standard feature in Unity, so it’s free. Enjoy!
Help & pull requests are welcome! I’d love it if we could get a complete (temporal) implementation working so that everyone can use it instead of FXAA (FXAA must die).
And don’t forget to check out my other assets (see my signature).
To enable spatial multisampling (S2x and 4x), unfortunately yes. To put it simply, you’d need to enable MSAA 2x, run SMAA on each subsample buffer separately and blend the two results. But Unity doesn’t give access to individual MSAA buffers, so…
On the other hand, T2x (temporal) should be possible. The only “problem” is that you need to generate a velocity buffer, which is generally done in vertex shaders. So unless you can find an efficient way to do it in screen space (using RenderWithShader wouldn’t help and would probably tank performance, think drawcalls x2), it has to be injected into your vertex shaders, so it would require a modified Standard shader (and any other shader you may use). If anyone can think of a good way around that, I would be most grateful.
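For what it’s worth, the script side of this is the easy part: you can expose last frame’s view-projection matrix as a global so that modified vertex shaders can compute a clip-space velocity. A minimal sketch, assuming you attach it to the rendering camera (the `_PreviousVP` global name and the class name are my own, not part of this asset):

```csharp
using UnityEngine;

// Stores last frame's view-projection matrix in a global shader matrix
// so modified vertex shaders can compute a per-pixel velocity.
[RequireComponent(typeof(Camera))]
public class PreviousVPProvider : MonoBehaviour
{
    Matrix4x4 m_PreviousVP = Matrix4x4.identity;

    void OnPreCull()
    {
        // Expose last frame's VP before this frame renders.
        Shader.SetGlobalMatrix("_PreviousVP", m_PreviousVP);

        // Remember the current VP for next frame.
        Camera cam = GetComponent<Camera>();
        m_PreviousVP = cam.projectionMatrix * cam.worldToCameraMatrix;
    }
}
```

A vertex shader would then project each vertex with both the current and the previous matrix and output the clip-space delta as its velocity. That per-shader modification is exactly the part that can’t be done generically, which is the problem described above.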
The direction Unity’s headed in is terribly sub-optimal. We’re seeing vast amounts of redundant calculation, stuff that could be computed once, earlier or elsewhere in the chain.
For example colour correction, tone mapping, DoF, antialiasing, motion blur: these are all effects that depend on resources that could be shared, like velocity buffers or blurred buffers. It’s getting kind of insane and, for framerate’s sake, might need standardising somehow.
It’s not important to a hobbyist, but for a console dev or someone targeting lower-powered hardware it’s practically half of your frametime gone, when really it should be a single, minimal-impact pass.
You may! I feel the exact same way. At least _CameraDepthTexture is only generated once, so there’s that.
I’d love to have a proper and tightly integrated post-processing chain (I would adapt my assets right away to minimize the performance hit) instead of what we currently have, meaning one behaviour per effect. It’s fine for two or three effects, but it can quickly get out of control and become terribly inefficient.
Unless you’re willing to dig into the code and merge what you want yourself, which is what I’m doing in a project of mine: I’ve modded my Chromatica asset to add tonemapping etc., so I get a full color correction stack & tonemapping in a single fast draw call.
Well perhaps we can get something going. If enough clever people sit down together, clever shit happens. Essentially what we’re looking for is a set of solutions for generating shared global textures. We know we need a number of high-quality blur buffers and so on.
I’m thinking that, much like you can define the deferred shader in Graphics options, you could configure and define textures that must be there. I’m not sure if that’s the right place for it; it could be on the camera. Failing this, we could just have the best authors work together. There comes a tipping point where it’s necessary to collaborate with Unity’s support. @robert or @Aras might know best.
For example, @sonicether does great bloom. He generates a number of high-quality smaller bloom textures. We can and should be using these everywhere, in a standardised format, since they’re generated already.
That still leaves the issue of combining them, but at least a lot of the SetPass work is done for us. I’d like Unity’s input on that. Sorry for derailing your awesome thread, but it’s semi-relevant at least.
UE4 has the edge here: it’s somewhat locked down and optimised, and Yebis goes much further, doing a number of effects with minimal passes.
Sorry for the dumb question, but I could not find the answer in the docs…
After a few hours, I finally managed to reference the SMAA component from another script (my GraphicOption script)…
I had to add “using Smaa;” at the top of the script to then be able to cache my “public SMAA” var.
What I am trying to do now is to change the quality via my GraphicOption script… I tried different syntaxes but none seems to be working… If any of you could point me in the right direction, that would be really appreciated…
using Smaa;
// And in your code, something like:
GetComponent<SMAA>().Quality = QualityPreset.High;
Obviously, GetComponent should be called on the correct GameObject reference, and the result should be cached for maximum efficiency. Then you simply have to set the Quality field to a value from the QualityPreset enum (you can of course change it dynamically at runtime). If you set it to QualityPreset.Custom, the component will use the data in the CustomPreset field (of type Smaa.Preset). There you can fine-tune every setting, but you shouldn’t need to: the standard presets are good enough for most cases (Ultra is overkill).
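Putting that together, a GraphicOption script could look roughly like this (a sketch only; the field and method names are just for illustration, while SMAA, Quality and QualityPreset come from the asset):

```csharp
using UnityEngine;
using Smaa;

public class GraphicOption : MonoBehaviour
{
    // Drag the camera holding the SMAA component here in the inspector.
    public SMAA smaa;

    void Start()
    {
        // Fallback: fetch and cache the component if it wasn't assigned.
        if (smaa == null)
            smaa = Camera.main.GetComponent<SMAA>();
    }

    // Call this from your options menu, e.g. SetQuality(QualityPreset.High).
    public void SetQuality(QualityPreset preset)
    {
        smaa.Quality = preset;
    }
}
```

The component is fetched once and cached in a field, so changing quality at runtime doesn’t pay the GetComponent cost every time.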
If you’re working in Visual Studio or MonoDevelop, auto-complete should help a great deal.