Yesterday AMD dropped more details on FidelityFX Super Resolution (FSR), AMD’s response to Nvidia DLSS. It is due for release on the 22nd. We don’t yet know how good the quality is, but the big surprise in the announcement was that it will support both AMD and Nvidia GPUs. FSR will support GTX cards going back to the 10 series.
With such broad cross-hardware support, it would make sense for a game engine like Unity to prioritize it. Unity is supposed to ship HDRP support for DLSS in 2021.2, but I’m wondering whether Unity has any plans yet for FSR.
I seriously doubt it’s as good as DLSS, since the latter can take advantage of custom hardware made for it.
Nonetheless, it’s really cool that we now have an alternative that works on ‘almost’ all GPUs used nowadays.
I, for one, can’t use DLSS.
Unity took its time implementing DLSS, but hopefully FSR gets support as well.
Maybe if the other engine implements it first again, they’ll cave in eventually.
To be fair though, I feel like it’ll be a pain for them having to juggle these two.
I am very disappointed in AMD. I would have thought they would learn from NVIDIA’s mistake with DLSS 1.0, which launched as a blurry, smeary mess. Yet here we are with a solution from AMD that is also a blurry, smeary mess. I would not want to turn this on for my games. Unity’s dynamic resolution looks better than this.
I’ll reserve judgment until I can see it on my own screen or find better screenshots. We shouldn’t judge from videos. The screenshots I’ve found have motion blur in them, so it’s really hard to say how good it will be.
You can do that if you wish, but just be aware that the technique they’re using is identical to the one NVIDIA used for DLSS 1.0. NVIDIA switched away from it because it couldn’t produce acceptable results and more often than not introduced artifacts into the final image, so they deemed it a failure.
The main advantage of a spatial technique is that it can be applied after everything has been rendered, whereas a temporal solution needs to be built into the game by the developer. We’ll have to wait and see if AMD can produce better results, but seeing how their drivers have been for years, I’m not holding my breath.
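To make the integration difference concrete, here’s a minimal Python/numpy sketch of what each kind of pass needs as input (my own toy illustration, not FSR’s or DLSS’s actual code): a spatial pass only sees the finished frame, while a temporal one also needs motion vectors and history from the engine.

```python
import numpy as np

def spatial_upscale(color, scale=2):
    """Purely spatial: only needs the finished frame, so it can be
    bolted on after rendering (nearest-neighbour here for brevity)."""
    return color.repeat(scale, axis=0).repeat(scale, axis=1)

def temporal_upscale(color, history, motion_vectors, blend=0.1):
    """Temporal: needs per-pixel motion vectors (and usually depth and
    jitter) from the engine, so the developer has to wire it in."""
    h, w = color.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject last frame's result using the motion vectors.
    prev_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    prev_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    reprojected = history[prev_y, prev_x]
    # Blend the reprojected history with the current frame.
    return blend * color + (1 - blend) * reprojected
```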
Oh dear, okay. DLSS 1.0 wasn’t any good. Have you found more technical details? I mean, there has to be some difference. Using an outdated technique would be pretty weird.
What I’ve mentioned above is all we currently know. Once the source is released we’ll know more. I’m just hoping that being open source means better quality, but again, I’m not holding my breath. That image I presented in an earlier post? That’s the Quality setting.
Looking at it again, there is a bit of motion blur on the sides, but I can’t see any in the middle. Check the pillar: its texture is very blurred compared to the non-upscaled pillar.
AMD’s technique has to be different from DLSS 1.0. With DLSS 1.0 they would pre-train a neural network against supersampled images on a per-game basis. At runtime the neural network would run on the tensor cores to do the upscaling, based on what it was “taught” a good upscaled image should look like from its training on that game. This is why DLSS has been restricted to Nvidia cards with tensor cores (and also why they named it Deep Learning Super Sampling).
AMD FSR, though, doesn’t require tensor cores, and AMD says it doesn’t require per-game training.
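Very roughly, the DLSS 1.0 recipe as I understand it looks something like this — just a conceptual PyTorch sketch of “train a network per game against supersampled targets”, not NVIDIA’s actual pipeline; the network and data here are made up:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpscaleNet(nn.Module):
    """Hypothetical tiny upscaling network; the real one is far bigger."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, low_res):
        x = F.interpolate(low_res, scale_factor=self.scale,
                          mode="bilinear", align_corners=False)
        return self.conv2(F.relu(self.conv1(x)))

def train_for_game(frame_pairs, epochs=10):
    """frame_pairs: batches of (low_res, supersampled) frames captured
    from ONE game. The weights only 'know' that game's content, which is
    why DLSS 1.0 needed per-game training."""
    model = UpscaleNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for low_res, target in frame_pairs:
            opt.zero_grad()
            loss = F.l1_loss(model(low_res), target)
            loss.backward()
            opt.step()
    return model  # shipped with the driver, run at inference time in-game
```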
No. DLSS is not restricted to tensor cores in the sense that it cannot run without them. DLSS is restricted due to a combination of performance considerations and to sell cards. Tensor cores are not special in the sense you’re making them out to be. All they do is multiply two 4x4 FP16 matrices, then add a third matrix to the result.
A tensor core uses a fused multiply-add, which makes this a single operation. CUDA cores would take several steps to do the same task, which is why the dedicated hardware exists.
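For reference, the math a single tensor-core op covers is just this (numpy here purely to show the operation, obviously not how the hardware runs it):

```python
import numpy as np

# One tensor-core operation is essentially a fused multiply-add on small
# FP16 matrices: D = A @ B + C, performed in a single step by the hardware.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float16)

D = A @ B + C  # a plain CUDA-core loop would need many separate multiply/add steps
```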
Looks like FSR is actually pretty good. Much better than DLSS 1.0. FSR is pretty competitive at high resolutions, but DLSS 2.0 is noticeably superior at low resolutions. Seems to be a good showing for an FSR 1.0 initial release, and I notice Unity is on the list of studios working on integrating it.
Eh, the performance and balanced profiles look like ass (performance especially literally looks like bilinear upsampling plus an unsharp mask). The other two look a bit better. I would say it’s a bit better than DLSS 1.0, because it doesn’t super-smear, but I don’t think the final image is that impressive.
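For anyone wondering what I mean by that, “bilinear upsampling plus unsharp mask” is roughly this (a quick scipy sketch, not FSR’s actual filter chain):

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def naive_upscale(img, scale=2, amount=0.8, radius=1.0):
    """Bilinear upscale followed by an unsharp mask: about the simplest
    possible spatial 'super resolution'."""
    up = zoom(img.astype(np.float32), (scale, scale, 1), order=1)  # bilinear
    blurred = gaussian_filter(up, sigma=(radius, radius, 0))
    sharpened = up + amount * (up - blurred)                       # unsharp mask
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```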
Maybe with adaptive resolution, areas of higher complexity or proximity could be rendered at a higher resolution, e.g. the centre of the screen in an FPS?
FSR is similar but adds sharpening and factors in motion vectors; honestly, without the 3x zoom for comparison you probably couldn’t tell for most games whether it’s on or not. If you didn’t have a comparison shot you would never actually know.
I was playing a shooter game from around 2015 last night. I was fiddling with the graphics settings a bit just because, and I realized that the game felt and looked better if I turned the resolution down to 1280x720 rather than 1920x1080.
What I realized is that the eye hardly notices the resolution decrease, and in fact the slight blurriness actually helps unify the graphics, hiding a lot of those hard CG lines and bad transitions. Materials blend better together, and the whole thing gets a sort of charm to it, whereas with too much fidelity it just felt like slightly dated realistic graphics.
Anyway, my point is, I think a little lack of crispness can be a good thing.