HotS at 4K resolution

With DLSS you output at your monitor’s resolution while effectively rendering at a lower one. It came out alongside the first 4K gaming monitors, when graphics cards couldn’t keep up. Even then it wasn’t timely or practical, and it doesn’t duplicate the effect of real resolution. I don’t care for SSAA either - it’s not the same as playing on a higher-resolution monitor. Sorry.

Highly similar to what I recall reading about DLSS. Here is a source for you:

https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/

Q: What is DLSS?

A: Deep Learning Super Sampling (DLSS) is an NVIDIA RTX technology that uses the power of AI to boost your frame rates in games with graphically-intensive workloads. With DLSS, gamers can use higher resolutions and settings while still maintaining solid framerates.

Q: How does DLSS work?

A: The DLSS team first extracts many aliased frames from the target game, and then for each one we generate a matching “perfect frame” using either super-sampling or accumulation rendering. These paired frames are fed to NVIDIA’s supercomputer. The supercomputer trains the DLSS model to recognize aliased inputs and generate high quality anti-aliased images that match the “perfect frame” as closely as possible. We then repeat the process, but this time we train the model to generate additional pixels rather than applying AA. This has the effect of increasing the resolution of the input. Combining both techniques enables the GPU to render the full monitor resolution at higher frame rates.

Amusingly, the bold text refutes your initial claim: AA is not equivalent to resolution in effect. Pound sand. The page also states pretty directly that DLSS was meant as a performance booster, in reference to the other poster.
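For what it’s worth, the pipeline that FAQ answer describes is basically a supervised image-to-image setup. Here’s a minimal sketch of that shape of training loop, with a toy CNN and random tensors standing in for the paired frames - nothing here is NVIDIA’s actual model, data, or training code:

```python
# Rough sketch of the kind of supervised training the FAQ describes: a network
# learns to map aliased low-resolution frames to super-sampled "perfect frame"
# targets. Toy model and random tensors only, purely illustrative.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a 2x larger image
        )

    def forward(self, x):
        return self.body(x)

model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(20):
    aliased = torch.rand(4, 3, 64, 64)       # stand-in for aliased 64x64 input frames
    perfect = torch.rand(4, 3, 128, 128)     # stand-in for super-sampled 128x128 targets
    opt.zero_grad()
    loss = loss_fn(model(aliased), perfect)  # penalise difference from the "perfect frame"
    loss.backward()
    opt.step()
```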

Basically, there’s a good deal of mumbo jumbo as these clowns try to improve the moods of gamers. I’ve gotten more cynical with experience. I recall the early explanations of DLSS being very similar to DSR, so I’m not sure why you used SSAA for the comparison; one would figure you’d keep the two SSes together. In essence, it’s faking a resolution. If DSR were so good, they wouldn’t have come up with DLSS.

Two screenshots:
One at a resolution of 1920 x 1080:

and one at a resolution of 2880 x 1620:

(a user-defined resolution on my GeForce GTX 1070)
As you can see, the interface is scaled up.

Yes, because it cannot output the high-frequency information at edges that we perceive as sharpness; instead it has to render them with a slight blur (like an anti-aliasing filter). Edges contain high-frequency information, which is where higher resolution has a benefit, since that information can actually be displayed. Seamless, filtered textures do not, which is why they can be rendered at a lower resolution without the player being likely to notice.
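A quick way to see this numerically: run a hard edge and a smooth gradient through the same down/up-sampling and compare the worst-case loss. This is just a toy 1-D numpy illustration of the point, nothing from the game or the driver:

```python
# A hard edge loses far more to down/up-sampling than a smooth gradient does.
# 1-D signals stand in for a row of pixels; entirely synthetic example.
import numpy as np

n = 256
x = np.linspace(0.0, 1.0, n)
edge = (np.arange(n) >= n // 2 + 2).astype(float)   # step edge: lots of high-frequency content
smooth = 0.5 + 0.5 * np.sin(2 * np.pi * x)          # gentle, band-limited gradient

def down_up(signal, factor=4):
    """Average-pool down by `factor`, then repeat samples back up (nearest upscale)."""
    low = signal.reshape(-1, factor).mean(axis=1)
    return np.repeat(low, factor)

for name, sig in [("edge", edge), ("smooth", smooth)]:
    err = np.abs(down_up(sig) - sig).max()
    print(f"{name:6s} worst per-sample error after 4x down/up: {err:.3f}")
# The edge can be off by half its full range at the transition;
# the smooth gradient is only off by a couple of percent at worst.
```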

DSR has always been flawed because it increases the rendering resolution, exactly as it says on the box. With a 2x scale factor in each axis you now have the performance cost of playing at 4K while still looking at a 1080p display. Not good.

DLSS does the inverse: you get some of the visual quality of playing at 4K on a 4K display while the GPU internally renders at a lower resolution, so it performs closer to how it would at that lower resolution. It takes advantage of the GPU’s “tensor cores” to run the AI in parallel with the traditional shaders.
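To put rough numbers on the two directions (the DLSS internal resolution below is just an assumed “quality”-style example, not an official figure):

```python
# Back-of-the-envelope shaded-pixel counts for DSR vs a DLSS-style upscale.
def pixels(w, h):
    return w * h

native_1080 = pixels(1920, 1080)
native_4k   = pixels(3840, 2160)

# DSR with a 2x scale factor per axis on a 1080p monitor: shade 4K worth of
# pixels, then downscale, so you pay the 4K cost while seeing a 1080p image.
dsr_cost = pixels(1920 * 2, 1080 * 2)

# DLSS-style 4K output: shade an assumed lower internal resolution, then upscale.
dlss_internal_cost = pixels(2560, 1440)

print(f"1080p native : {native_1080:,} px")
print(f"4K native    : {native_4k:,} px")
print(f"DSR 2x on 1080p shades {dsr_cost:,} px ({dsr_cost / native_1080:.0f}x the 1080p cost)")
print(f"DLSS-style 4K output shades ~{dlss_internal_cost:,} px ({dlss_internal_cost / native_4k:.2f}x the 4K cost)")
```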

Of course there is always the “what if?” question: had Nvidia spent that tensor-core die area on normal shader execution units, maybe the card could have performed better at higher resolutions to begin with. But developers can only do their best with the hardware players actually have, so DLSS is here to stay.

This means the game is DPI aware. Diablo III was not when Nvidia first introduced the feature.


The only difference I see in the game is that the colours seem darker, while the standard resolution looks a bit washed out.
But I get micro-stutters that give me slight motion sickness
(maybe I shouldn’t set the options to Extreme when experimenting with higher resolutions :thinking:)

What is “higher frequency information” and what does that have to do with resolution rather than frame rate? By the way, edges are everywhere apparently.

No…
It came out so Nvidia could make ray tracing somewhat usable, by giving you a way to gain back the performance you lose by enabling it.

You should try doing some research into the actual subjects you’re talking about.

No one is saying that.

Aliasing happens when pixels can’t settle on what color they should be, which makes images blurry - to give a really short summary.

Anti-aliasing is the umbrella term for making this sharper and more accurate, using many different methods. The most expensive form, which is not commonly used, is SSAA, which is essentially rendering at a higher resolution and downscaling.
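As a minimal sketch of that idea: produce (here, just synthesize) the image at a multiple of the target resolution and box-average it back down. The diagonal-edge “scene” below is made up purely to keep the example self-contained:

```python
# SSAA in miniature: render at a higher resolution, then average pixel blocks
# down to the display resolution. Synthetic diagonal edge as the "scene".
import numpy as np

def render_edge(size):
    """Toy renderer: 1.0 above a diagonal edge, 0.0 below."""
    y, x = np.mgrid[0:size, 0:size]
    return (x * 0.37 + 10 > y).astype(float)

def ssaa(target_size, factor=2):
    """Render at factor x the target resolution and box-filter down."""
    hi = render_edge(target_size * factor)
    return hi.reshape(target_size, factor, target_size, factor).mean(axis=(1, 3))

direct = render_edge(128)        # hard-aliased: every pixel is fully 0 or 1
smoothed = ssaa(128, factor=4)   # edge pixels now take intermediate coverage values
print("distinct pixel values, direct :", np.unique(direct).size)
print("distinct pixel values, 4x SSAA:", np.unique(smoothed).size)
```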

Because a higher pixel count allows the pixels to more accurately reflect what the rendered image is meant to look like, it reduces Aliasing.

If you have less Aliasing due to more pixels, you’ve done what anti-aliasing is used for.

If you really want a deep dive into Aliasing and Anti-Aliasing, here you go:

But to be simple… aliasing = less sharp picture. More pixels = less Aliasing to start. Anti-Aliasing = reducing Aliasing.

Having less Aliasing and reducing aliasing both do the same thing: GIVE YOU A SHARPER IMAGE.

Well, in theory. Some kinds of “cheap” anti-aliasing come with massive drawbacks, precisely because they try to do the job with fewer resources.


It probably has to do with what program you use to scale it.

I got the UI not to scale with VSR or DSR… I forget which. Sometime in early-to-mid 2020.

It has to do with aliasing.

Do you even understand what is being discussed?

I’ve set the user-defined resolution in the Nvidia Control Panel.

Try this?

How to enable Dynamic Super Resolution in games. | NVIDIA.


Sampling theorem.
https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
This applies to computer graphics because pixels are discrete samples, while the 3D scene shown to the player exists in a continuous space. This is why aliasing appears along edges: within that space an edge behaves like a discontinuity (impulse-like), which contains an unbounded frequency spectrum.
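A tiny numerical illustration of that: compare the spectrum of a step “edge” with that of a band-limited signal. The step’s energy is smeared across the whole band, so some of it always lands above whatever Nyquist limit your pixel grid gives you. Toy numpy example, nothing engine-specific:

```python
# The spectrum of a step decays slowly (roughly 1/f), so a sizeable share of
# its energy sits in the upper part of any finite sampling band.
import numpy as np

n = 1024
step = (np.arange(n) >= n // 2).astype(float)        # step "edge"
smooth = np.sin(2 * np.pi * 8 * np.arange(n) / n)    # band-limited signal, 8 cycles

for name, sig in [("step", step), ("smooth", smooth)]:
    spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
    total = spectrum.sum()
    high = spectrum[len(spectrum) // 4:].sum()       # magnitude in the upper 3/4 of the band
    print(f"{name:6s}: {100 * high / total:.1f}% of spectral magnitude above 1/4 of the band")
```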

Fundamentally, there is a limit to the bandwidth the human eye can perceive, due to its construction and its own filtering. This is why a 4K display and a 1080p display look equally detailed when viewed from a sufficient distance: your perception is discarding the high-frequency information.

That and so they could justify selling you tensor cores that run AI tasks that your games would otherwise never use.

Actually it is to combat aliasing artefacts caused by discrete sampling of a signal that contains frequency components above the Nyquist frequency. It does not make images sharper; if anything it technically does the opposite, by removing high-frequency information that cannot be accurately sampled. It will remove the “step” artefacts along an angled edge, but it cannot make the edge look sharper because it is still bandwidth-limited by the resolution. A higher resolution can have sharper-looking edges because it has more bandwidth with which to reconstruct the discontinuity.
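The same point in one dimension: decimate a too-high frequency directly and it shows up as a bogus low-frequency alias at nearly full strength; low-pass filter first (a crude moving average here) and that content is removed rather than misrepresented. Toy example with assumed numbers:

```python
# Pre-filtering before downsampling: remove what you can't sample instead of
# letting it alias into a frequency that was never in the scene.
import numpy as np

n = 1000
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.45 * t)     # 0.45 cycles/sample: above Nyquist after 4x decimation

factor = 4                                 # new Nyquist is 0.125 cycles/sample
naive = signal[::factor]                   # plain decimation: aliases to a fake low frequency

kernel = np.ones(factor * 2 + 1) / (factor * 2 + 1)   # crude moving-average low-pass filter
filtered = np.convolve(signal, kernel, mode="same")[::factor]

print("RMS of naive decimation   :", np.sqrt(np.mean(naive ** 2)).round(3))
print("RMS of filtered decimation:", np.sqrt(np.mean(filtered ** 2)).round(3))
# The naive version keeps nearly full amplitude as an alias; the filtered one is
# heavily attenuated, because the content above the new Nyquist was removed.
```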


Yes, but generally speaking, people find AA images to be sharper.

You are 100% correct in proper explanation here.

I’m just using the more common language used widely.


This is why I run HotS and other games at 1440p and 4K despite only having a 1080p monitor. :man_shrugging:

It looks nicer than FXAA with less of a performance hit.
