Nvidia recently made DLDSR (Deep Learning Dynamic Super Resolution) available to users of GeForce RTX cards. It does the same thing as regular DSR, super-sampling, or setting the in-game “resolution scale” above 100%, but it uses an AI model running on the onboard Tensor cores to dramatically increase the quality.
There is a lot of confusion going around about this right now, because many people have the idea that DLDSR improves performance over regular DSR or a >100% resolution scale. In reality, at the same resolution scale / DSR factor, DLDSR performs no better than regular DSR; it just looks better.
Because DLDSR’s output quality is better than that of regular DSR or the in-game resolution scale, in many cases you can get away with a lower DLDSR factor.
For example, 2.25x DSR and 2.25x DLDSR perform exactly the same, but 2.25x DLDSR looks better. If you were previously using 2.25x DSR, you might be able to drop to 1.78x DLDSR (a lower render resolution) and get the same quality as 2.25x DSR, or better. That potential to use a lower DSR / DLDSR factor, and therefore a lower render resolution, thanks to DLDSR’s higher quality is where any performance gain would come from.
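To make the factor arithmetic concrete, here’s a minimal Python sketch (the function name is just mine for illustration). The one assumption is that DSR/DLDSR factors multiply the total pixel count, so each axis gets scaled by the square root of the factor, which is consistent with 2.25x turning 1440p into 4K:

```python
import math

def dsr_resolution(width, height, factor):
    # DSR/DLDSR factors multiply the TOTAL pixel count,
    # so each axis is scaled by the square root of the factor.
    axis_scale = math.sqrt(factor)
    return round(width * axis_scale), round(height * axis_scale)

# The listed 1.78x factor is a rounded 16/9, so using the exact
# fraction reproduces the resolution the driver actually exposes.
print(dsr_resolution(2560, 1440, 16 / 9))  # (3413, 1920)
print(dsr_resolution(2560, 1440, 2.25))    # (3840, 2160), i.e. 4K
```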
My results in World of Warcraft so far have been very good. World of Warcraft has its own built-in resolution scale, which I usually run at 133% on my 1440p monitor (3413x1920), basically the same as 1.78x DSR. I tried 1.78x DLDSR (3413x1920) instead, leaving the in-game resolution scale at 100%, and performance was about the same as before, but it looked better. I then tried a DLDSR factor of 2.25x (4K), but the performance hit was too noticeable.

Then I experimented with using the game’s built-in resolution scale in conjunction with DLDSR. With DLDSR set to 2.25x (4K), I set the in-game resolution scale to 67%, which means the game is actually rendering at my native resolution of 1440p while still going through DLDSR, and there was almost no performance hit over native 1440p while it still looked a whole lot better. I’m honestly surprised that combining the two worked so well; it gives me the option of using DLDSR at my native render resolution and thereby maintaining performance.
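For anyone who wants to check the arithmetic behind that last combination, here’s a sketch continuing the same per-axis math (again, the names are mine; I’m also assuming the in-game resolution scale applies per axis, which is what the 67% figure implies, since 2560 / 3840 ≈ 67%):

```python
import math

def effective_render_resolution(width, height, dldsr_factor, in_game_scale):
    # The DLDSR factor multiplies total pixels (sqrt(factor) per axis),
    # then the in-game resolution scale is applied to each axis.
    axis_scale = math.sqrt(dldsr_factor)
    return (round(width * axis_scale * in_game_scale),
            round(height * axis_scale * in_game_scale))

# 2.25x DLDSR on a 2560x1440 monitor with a 67% in-game scale:
print(effective_render_resolution(2560, 1440, 2.25, 0.67))
# -> (2573, 1447): almost exactly native 1440p, so the game renders at
#    roughly native cost while DLDSR still downscales from a 4K buffer.
```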
The top picture link is native 1440p, no DLDSR, 100% in-game resolution scale.
The bottom picture link is 2.25x DLDSR (4K) combined with 67% in-game resolution scale (bringing the render resolution back down to 1440p).
Just look at how much easier it is to read the nameplates, and how much clearer the rock face in the distance is. GPU temps did seem to increase, I guess because the Tensor cores are actually being used.