For a few days I’ve been playing with Overwatch’s graphics settings to find a good balance between latency and visuals (I don’t want an ugly game, but I don’t want huge latency either).
I discovered the FSR 1.0 option (render scale at 99% and sharpness at 1.00) together with Model Detail on High, and I fell in love with Overwatch again because it makes the game look much better (except in the character gallery, which stays ugly unless you push the render scale to at least 150%).
However, all this seems to have come at a price: on two occasions my character didn’t fire or activate an ability despite my clicking or pressing the key on my keyboard.
Once it took several seconds before D.Va fired her pistol after being ejected from her mech.
Then Ramattra didn’t transform when I pressed the button
Regarding my latency settings, I followed the recommendations of Battle(non)sense:
I have a 360Hz G-Sync display. I enable G-Sync + V-Sync in the NVIDIA settings, plus Reflex + Boost and Reduce Buffering. My FPS cap is left on automatic, since Reflex limits it to 326 FPS on its own.
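For reference, that cap lands close to the rough formula the community cites for Reflex’s automatic limiter when G-Sync + V-Sync are on. A quick sketch, assuming that approximation (it isn’t official NVIDIA documentation, so treat it as a ballpark):

```python
# Community approximation for the automatic Reflex FPS cap with G-Sync + V-Sync
# (an assumption, not an official NVIDIA formula).
def approx_reflex_cap(refresh_hz: float) -> float:
    return refresh_hz - (refresh_hz * refresh_hz) / 3600.0

print(approx_reflex_cap(360))  # ~324 FPS, close to the 326 FPS the game reports
```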
For my graphics settings, I followed the Neoseeker guide:
Resolution: 1920x1080 (360Hz)
Render Scale: 99%
Graphic Quality: Low
Image Sharpening: FSR 1.0 - 1.00
Texture Quality: High
Texture Filtering Quality: x16
Local Fog Detail: Low
Dynamic Reflections: OFF
Shadow Detail: High
Model Detail: High
Effects Detail: Low
Lighting Quality: Ultra
Antialias Quality: High - SMAA Medium
Refraction Quality: High
Ambient Occlusion: Medium
Local Reflections: ON
Damage FX: Default
With these settings I’m regularly above 300 FPS, but for some reason I still get FPS drops below 250.
Does FSR 1.0 add enough GPU cost to explain these FPS drops? Does FSR 1.0 add latency?
I wish someone had replied… I’m like you: I like low render/input lag, but I also like being able to SEE what’s happening… I’m also wondering what FSR does to OW in particular.
FSR introduces latency. If you want to avoid the variance that comes with it, you need to set a few things up:
First, anti-aliasing means more samples per pixel, which means more work on the GPU side. If your GPU is your bottleneck, that’s the first thing to look at.
Second, rendering at a lower resolution reduces the load on the GPU but increases it on the CPU, and it also introduces latency if FSR is used.
For the best result, you should lock to a framerate your PC can actually hold, often about 10-30 FPS below your maximum. If your PC only peaks at a certain FPS, it will saturate and then fall back to the framerate you mentioned: it can’t sustain the peak for long periods, so you get a spike followed by a drop.
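A rough sketch of that rule of thumb (the margin and the peak value here are illustrative, not measurements from his system):

```python
# Cap a bit below the peak framerate so the PC isn't constantly chasing a
# number it can only hit in bursts. Margin and peak are illustrative values.
def pick_cap(peak_fps: float, margin: float = 30.0) -> int:
    return int(peak_fps - margin)

print(pick_cap(326))  # 296; if drops well below that persist, lower the cap further
```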
Roughly, by component:
Refraction: CPU
Lighting: CPU
Local reflections: CPU
Anti-aliasing: GPU
Texture filtering and texture quality: GPU
Damage FX: CPU
Ambient occlusion: an oddball, but usually GPU
To be clear here: Lowering the resolution does not increase the CPU load per frame, but by reducing the GPU workload it can increase the number of frames queued by the CPU. So the CPU load will increase, but only in the sense that it is spending less time idle, not because lower resolution actually has an increased CPU cost.
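As a simplified model of that interaction (the frame times below are hypothetical, just to show the effect):

```python
# The delivered framerate is set by whichever side takes longer per frame.
# Frame times are hypothetical milliseconds, not measurements.
def delivered_fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 3.3  # the CPU's per-frame cost doesn't change with resolution

print(delivered_fps(cpu_ms, gpu_ms=4.0))  # GPU-bound: ~250 FPS, the CPU has idle time
print(delivered_fps(cpu_ms, gpu_ms=2.5))  # lighter GPU load: ~303 FPS, now the CPU
                                          # sets the pace and sits closer to 100% busy
```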
Yes, it would increase the load on the CPU because it would increase the number of frames, assuming your GPU and not your CPU was the bottleneck. If your CPU was the bottleneck, doing that would make the drops more aggressive and leave the GPU even more idle, which would affect both the maximum and the minimum FPS achieved.
That’s a good point you highlighted.
My goal was to triage what his bottleneck could be; my guess would be the CPU, given the frame variance and the fact that the CPU-tied settings are the ones with the highest impact.
I was trying to address the variance. Trying framerate normalization (capping at a value the PC can sustain) is advisable if the goal is to reduce input lag and its variance, which was my intent.