Blizzard, PLEASE add a Motion Blur option to the engine

Yes, I know, but all forms of blur are important. I fully recognize that display blur is not the same as camera blur or per-object motion blur as you describe. However, ignoring it in this conversation seems unwise. It is certainly relevant to the point I'm making: the technologies in monitors that reduce motion blur do so to increase the clarity of the object, because visual clarity is extremely important.

This is only true in the sense that ULMB runs at a lower refresh rate. A 240 Hz display running ULMB at 120 Hz should have no more lag than a standard 120 Hz display. In fact, it may actually be better, because the panel itself may have faster response times while switching more quickly.

No, I’m not saying they look the same, I’m saying that both reduce clarity of the object. Any form of blur will do so.

Increasing framerate helps because of the reduced latency. High precision input helps because the OW engine under the hood runs at a fixed 62.5 Hz tick rate, so fast motions (e.g. flick shots) may be disconnected from the interpolated visuals you see on the screen.

Because the replays are not accurate. The devs have gone on record stating that the camera movement you see in the replays is not going to be accurate to what the client saw in game.

Play in 4k

It looks like the cursor just has multiple images appearing and disappearing with very visible gaps between them. This is called the Mouse Arrow/Cursor Stepping effect.

I don't see that. I only see the mouse cursor moving.

This effect can be VERY disorienting and INDUCE MOTION SICKNESS.

I don't feel motion sickness at all when playing OW :man_shrugging:
Nor do I feel disoriented. You should check your health.

I like motion blur in RPGs and singleplayer games. It feels more realistic and immersive. In FPS games, however, it's just a straight fat NO.

Not exactly. Blur, for example, adds an extra layer of dynamic content on screen that can cause jitter if your hardware is operating at its limit, which is why it impacts overall framerates. I used the example of an R9 380 and a GTX 1080 in another post. At the same framerates (120-160), the R9 380 has much lower SIM values than the GTX 1080, while the GTX 1080 can run at higher framerates to compensate: around 300 fps its average times are similar, a bit higher than the R9 380 at 160 fps and a bit lower than the R9 380 at 120 fps. Nvidia Reflex is a feature that dynamically tries to adjust latency to get better lows, reducing the average while also giving you some moments of really bad timings. That lets the GTX 1080 at 300 fps get times close to half those of the R9 380 at 120-160 fps, but with enough jitter to perform a bit worse in a few moments, making the overall experience nearly the same, with a margin of roughly 20-30% toward the R9 380 overall and almost 50% better than the 1080 managed before.

Because Nvidia GPUs have more driver overhead compared to AMD GPUs. Their SIM timings weren't great without Reflex once input lag was considered. Reflex is a technology trying to get results similar to what Radeon Anti-Lag already delivered, but because of AMD's implementation and already-lower driver overhead, the "return" there was much smaller than with the Nvidia counterpart released about two years later. The issue with Reflex is the jitter it introduces, but the percentage is low enough that it won't actually hurt in most cases. Add the appropriate gear with G-Sync/FreeSync and you avoid some jitter by slightly increasing overall input lag compared to the setup without proper gear, and it reduces input lag by roughly 20-40% on average compared to running without Reflex. I think that's within the range of Nvidia's own numbers about Reflex, which proves my point. Later I can try to find the post where I talked about it.

AMD Chill is for reduced energy consumption; it locks the maximum and minimum framerate within a certain threshold and is often used to get a stable experience. The feature I mean is Anti-Lag, from June 2019, which is available on almost any card from the 300 series onward (not sure about earlier ones). Radeon Chill is the inverse of Anti-Lag in its effects; their controls nullify each other, and even in Radeon Software you can't have both enabled at the same time. Anti-Lag only helps a minimal amount and doesn't scale very well compared to Reflex because the implementations differ a lot, but both have almost the same purpose.

There are, if your hardware has some constraint: triple buffering helps you get a smoother framerate at lower framerates, i.e. when you're running below your target (either the monitor's refresh rate, some kind of desync, or under 30 fps). That's why I said it could work really well with a motion blur effect if the blur drops your framerate significantly. I'm assuming roughly a 10-30% framerate loss; considering that GTX 1050/1060 cards are popular, that could mean a below-100-fps experience in some scenarios, which triple buffering could smooth out.

True, Reduce Buffering is there to cut the number of pre-rendered frames. That improves responsiveness because it reduces the time between the CPU and GPU, and what often bottlenecks the CPU is the GPU. That's why I said in some special cases it wouldn't need to be disabled.

AMD GPUs often work in a more "streamlined" way, while Nvidia GPUs work based on workloads; that's why Nvidia GPUs have higher peaks of power draw while AMD's are often more straightforward. AMD has Anti-Lag and less driver overhead, making its input lag overall lower than the Nvidia counterpart's.

That's true but also quirky, because input lag doesn't only mean the "input" from the user. For example, in Diablo 3, if you had around 100 fps your loading screens were severely impacted back when the client was 32-bit, and even today higher fps gives overall faster or near-instant loading screens, even if the benefit in other parts of the game is "almost null." More fps can provide less input lag overall, but it can also help the application load assets faster, because modern CPUs have some quirky behavior around their internal timings and thresholds, and having every piece of hardware work at the same "pace" gives better overall system responsiveness. It's like making every piece of hardware work in sync, and that applies to every application running on the PC. That's why, if you switch between applications after some heavy, performance-hungry usage, the new app sometimes gets a kind of "accelerated" or "fast-forwarded" timing to compensate before it stabilizes. So yes, there are diminishing returns for user input lag, but not for overall latency in the OS, because the hardware has thresholds where it runs in a low power state, a high power state, or a turbo power state. Games often use the turbo state in short bursts of performance, which is why people chase higher fps to spend more time in a higher power state, even if that means a lower maximum and lower minimum fps overall.

The 99%-bound territory is a thing for each piece of hardware: CPU, GPU, RAM, and even disks. That's why we often don't see high-capacity disks with crazy speeds, and why disks often get slower once more than half their space is used.

That's awesome, it's always nice to provide good info for everybody. Knowledge and information are only great when shared and more folks are able to get at them. It's really rare to find people willing to discuss useful stuff that often doesn't become general knowledge. I hope they can find a nice way to help on the accessibility side without hurting performance too much.

But the point of this technology is not to solve the stroboscopic effect; it never was. LCD persistence causes blur even with extremely slow-moving objects: just go to TestUFO and open the moving map photo test at about 1000 pixels/sec. It's a very slow-moving 2D image that you can easily track with your eyes, but you just can't read any fine detail EVEN at 240 Hz without motion blur reduction.

Also, this type of blur doesn't introduce any information at all. It just "holds" old information. An artificial motion blur tries to interpolate "infinite" information between two frames. If anything, LCD blur could make the stroboscopic effect even worse (but more detailed), something that artificial motion blur tries to solve.

They're not related at all regarding their effects and reasons to exist. If anything, motion blur reduction (on the display) can make artificial motion blur even better, given enough frame rate.

A proper motion blur doesn't try to "hide information"; it tries to create MORE information between frames. It's not perfect, especially with low sample data (the same reason DLSS gets better and better as you raise the base resolution: more sample data).
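A toy sketch of what "creating information between frames" could look like, in Python (the numbers and the straight-line motion are made up; a real renderer does this per pixel using motion vectors):

```python
# Toy illustration (not any engine's actual code): per-object motion blur can be
# thought of as blending several sub-frame positions of an object between two
# rendered frames, instead of showing only the two endpoints with a gap between.
def blur_trail(pos_start: float, pos_end: float, samples: int) -> list[float]:
    """Sub-frame positions that would be blended into the blur trail."""
    step = (pos_end - pos_start) / (samples - 1)
    return [pos_start + i * step for i in range(samples)]

# Object moves from x = 10 px to x = 18 px between frame N and frame N+1.
print(blur_trail(10.0, 18.0, samples=5))   # [10.0, 12.0, 14.0, 16.0, 18.0]
# Averaging the object's contribution at these positions smears it along its
# motion path, "gluing" the two discrete frames together instead of leaving a gap.
```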

Both DLSS and motion blur are solutions that try to create information where it doesn't exist, one for spatial resolution and the other for frame rate. Both are far from perfect, but they can be "good enough" at providing more information for less processing power.

Display motion blur has absolutely nothing to do with any of this. It's just a side effect of the continuous light output from the pixels of an LCD. The only "advantage" of it is that it avoids flickering. As always… a trade-off. Some people are EXTREMELY sensitive to flickering while others are not.


Since the article I talked about in the OP is from Blur Busters, arguably the most informative site about this exact subject, it kind of doesn't make sense to argue that motion blur resulting from the sample-and-hold effect on LCDs and artificially introduced motion blur in game engines (to solve the stroboscopic effect and smooth the jarring nature of discrete images) are the same just because both are "blur."

For some reason, you’re thinking that artificial motion blur “hides” information. It doesn’t (or shouldn’t). It just tries to interpolate information between two frames.

It depends how FAST this object (or point of focus) is. Display motion blur doesn't "remove" any information. The fact that you can't see fine details because of it is the same reason you can perceive gaps between different frames: your EYES are blurring things, not the display.

You said you don't see "gaps" between frames when recording in slow motion. Well, yeah… for the same reason you don't see "blur" if you record your LCD screen. Your eyes are responsible for creating that blur, not the display itself.

This is just the effect of something we already talked about here: persistence of vision.

You can play at 60 FPS on a 240 Hz display using FreeSync and have almost exactly the same input lag as 240 FPS/240 Hz because of the faster scanout, one of the wonders of FreeSync/G-Sync technology. Raising the FPS is not just about input lag. You raise the FPS precisely to get smoother images that carry more information, which leads to easier tracking aim and faster flicks.

High precision input has NOTHING to do with the tick rate. The game is already simulated at your frame rate (the SIM number you can see using Ctrl + Shift + N). So it doesn't matter, from your POV, that the server runs at a 62.5 Hz tick rate. If you can see and hit, the target will take damage. The engine will even roll back in time to ensure that you hit the target.

High precision input just takes the input information straight from the mouse (instead of from the visual simulation). If you're using a 1000 Hz polling rate, you can shoot at up to 1000 possible locations in that second, regardless of how much FPS you're seeing. At 8000 Hz (like the Viper 8K), that's 8000 possible locations/sec. It's especially useful for flick players.

This is EXACTLY why I died as Tracer in the video above. I was blinking backwards in my POV, and the McCree shot there, even though in his POV I hadn't even started to move. Just go to 81GTYR and see for yourself. I'm playing as Zetron XL.

Not many games have this, which can be called sub-frame input. Reflex Arena is one of the games that has it too.

Still, it's WAY off, isn't it? The replay isn't perfectly accurate exactly because of what I explained above. It doesn't matter that the server runs at ~63 Hz; you'll always be hitting interpolated data. The benefit of raising the tick rate would be more "precise" data (and even more precise interpolation). But visually, if you can see AND hit, it WILL register, regardless of whether that player was actually there or not.

Not only that, you'll hit interpolated data that you can't even SEE on your screen yet, because high precision input exists.

Motion blur only tries to interpolate data between two visual frames. How would that interpolated visual data be less "precise" than what is already simulated in the game?

You're saying that you should have "pristine," jarring, precise frames because they would lead to more precise shots. I think not.

The only way to have this extremely precise data is to have thousands of frames per second, displays that can show those thousands of frames, and a tick rate running at thousands of Hz. Everything in between will ALWAYS be interpolated.

If fine detail is so important for shooting, then we should see the EXACT hitboxes of the players and not a silhouette that is significantly mismatched from the actual hitboxes.


I think you may be confused about the purpose of the feature. It has literally everything to do with tick rate. I suggest you re-read the article that describes it. What I mention is actually in the first paragraph of the post:

The problem is that the simulation (tick) of the engine is fixed but the rendering is interpolated and variable. The location your camera is facing and the recorded location of where you shot were seen differently between the two threads in the engine. Under normal slow-moving conditions, the difference between those two is very small, and thus you won't notice much of a difference, but during very fast flick shots, those two points can be very far apart. This creates a disconnect between where the user thought they clicked and where they actually did.
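To put rough numbers on that disconnect, a minimal sketch (the 2000°/s flick speed is an assumed value, not something from the article):

```python
# Hypothetical back-of-the-envelope math for the paragraph above: how far the
# camera can rotate between the moment you click and the fixed simulation tick
# that actually consumes the fire command.
TICK_RATE_HZ = 62.5            # Overwatch's fixed simulation rate
FLICK_SPEED_DEG_PER_S = 2000.0 # assumed speed of a very fast flick

tick_interval_s = 1.0 / TICK_RATE_HZ      # 0.016 s between simulation ticks
worst_case_wait_s = tick_interval_s       # click arrives just after a tick
aim_drift_deg = FLICK_SPEED_DEG_PER_S * worst_case_wait_s

print(f"Up to {aim_drift_deg:.0f} degrees of rotation between click and tick")
# With high precision input, the engine instead records the camera vector at
# the timestamp of the click itself, so this drift no longer applies to the shot.
```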

You died in the clip above because the replay is not accurate to what happened, and he clicked your head quickly enough that the replay was unable to record it with a high enough precision.

The replay isn’t accurate because mouse input is not recorded at the same rate as on the client, therefore some data is missing and interpolated.

No, I’m saying you should have pristine precise frames because it allows your brain to see more detail. The higher the clarity and distinction of the visual, the easier it is to keep track of.

There is a pretty important distinction here. Visual representations of the meshes are always confined to the hitboxes. They are designed such that if you shoot at the visual, you achieve a hit, but they also give some leeway. Yes, the interpolation and rollback do introduce some inaccuracy here, but using per-object motion blur degrades the detail of the information, which in turn degrades the precision of your shot. By adding motion blur to the mix, you are effectively losing visual information that can help determine the current position of the hitbox. The tradeoff is that the animation and movement will appear smoother, but smoother in this case reduces the accuracy of the data. Any rendered information that is post-processed is, by definition, not accurate to the original mesh data.

Way too much GPU usage for fewer frames than my display can show. Not a good solution.

There is so much information already in this topic that I'm REALLY too lazy to explain everything to you again. If you're not lazy, just read the links.

Everyone who has vision can see the phantom array effect. Your eyes have something called persistence of vision. You're certainly seeing it, just not noticing it, or you're not looking at the effect correctly:

An image to help: https://blurbusters.com/wp-content/uploads/2017/07/MouseStepping-60vs120vs240-690x518.png.webp

I'm not feeling hungry right now, so you're probably not feeling hungry too.

It's OK, you can turn it off. But I hope you take in all the information I gave here, because most people who disable this effect have no idea how many different implementations exist and how many requirements must be met to make it look correct.

It shouldn't, if you're not maxing out GPU usage (GPU bound). Also, you're giving WAY too much credit to SIM timings. The SIM number just gives you the timing of the simulated world (interpolated from the tick rate of the server). For PROPER latency information you need high-speed cameras, plus MSI Afterburner to check frame timings.

You keep talking about "jitter" and I'm not quite following. It doesn't matter which GPU you're using; what matters is how much you're asking of it. You can use an R9 380 and a GTX 1080, both at 50% render scale and absolute minimum graphics. In those configs, both of them should have enough room for your CPU to push more frames. Assuming both reach the same framerate (let's say 400 FPS), both with less than 95% GPU usage, both SHOULD have exactly the same input lag, SIM number, and whatever else.

Nvidia Reflex does absolutely NOTHING if you're not hitting 99% GPU usage. It only helps in GPU-bound scenarios. If you're already capping your framerate, you're probably not hitting 99% GPU usage.

Actually, AMD Anti-Lag is similar to Nvidia Ultra-Low Latency Mode. Reflex has nothing to do with either solution, and is a much better option for GPU-bound scenarios.

And these three solutions are arguably useless without a GPU bound scenario.

I guess the term you're looking for here is frame pacing. Reflex solves input lag issues, but it doesn't solve frame pacing issues. Another reason why you should keep GPU usage lower.

Neither G-Sync nor FreeSync adds ANY additional display lag. However, using either solution without capping your framerate properly DOES introduce lag. This is why you should cap your framerate slightly below the maximum refresh rate of the display. See here: https://blurbusters.com/wp-content/uploads/2017/06/blur-busters-gsync-101-gsync-ceiling-vs-fps-limit-60Hz.png

GSync 101 on Blur Busters explains this very clearly.
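As a rough illustration of that rule of thumb (the 3 fps margin below is the figure commonly cited from the GSYNC 101 guide, so treat it as an assumption and check the article for the exact recommendation):

```python
# Rule-of-thumb sketch: cap the framerate slightly below the display's maximum
# refresh so G-Sync/FreeSync stays engaged and the frame queue never backs up.
def vrr_fps_cap(refresh_hz: float, margin_fps: float = 3.0) -> float:
    """Framerate cap that keeps variable refresh rate active
    (the margin is a commonly used value; adjust to what your limiter supports)."""
    return refresh_hz - margin_fps

for hz in (60, 144, 240):
    print(f"{hz} Hz display -> cap around {vrr_fps_cap(hz):.0f} fps")
```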

Not only for that. It behaves almost exactly like RTSS if you set the minimum and maximum framerate to the same value: extremely consistent frame pacing (like RTSS already provides).

It DOES have better frame pacing compared with an in-game FPS limiter, but at the cost of slightly higher input lag. It's great for games that don't have an FPS limiter, OR if you're looking for extremely precise frame pacing.

Of course, because Anti-Lag has no reason to exist in a situation where you're not GPU bound.

Yes, extreme cases where you're trying desperately to raise your framerate. I wouldn't bother. It's better to just lower the framerate and get consistent frame pacing at that point. You'll actually have less input lag that way.

To be clear, any time you're GPU bound, you'll have MORE input lag. Reflex is the only thing I know of that can solve this. But it's very easy to avoid this situation in most setups, unless you want every graphical fidelity option maxed out, in which case you're probably already thinking about upgrading.

I'm not sure about that. I've never seen any test showing something like that. On the other hand, at least Nvidia has Reflex.

Don't get me wrong. I love AMD. My last two GPUs are AMD (R9 390X and now RX 5600 XT). But I'm pretty sure that if both are working at the same workload (less than 95%) and both are running at the same framerate, both SHOULD have the exact same frametimes.

Yes, it depends on the engine. Talking specifically about Overwatch, it doesn't hurt to go higher and higher. The "world" is simulated on your machine, after all. Maybe it does have a ceiling, but only the devs would know.

It is, for gaming at least. Being GPU bound is essentially the most intrusive scenario for input lag.

It is indeed awesome, you can see it here: https://blurbusters.com/wp-content/uploads/2017/06/blur-busters-gsync-101-60hz-60fps-vs-144hz-60fps.png.webp You can find this information in the GSYNC 101 series on Blur Busters.

Yep. It hurts me how much trash talk we have on this forum. There's so much hate and so many useless discussions here that I'm always shocked at how bad this supposedly "inclusive" community is.

Not sure what you're talking about here, BUT if you're interested in a way to play Overwatch competitively with a single hand, for example, you can check out gyro aiming solutions. Gyro Gaming is the best channel about that.

It doesn't have "everything" to do with tick rate, the same way that the SIM number doesn't have everything to do with tick rate. Both work as interpolation of some sort.

To be clear… if you're using 250 Hz on your mouse and playing at 250 FPS, high precision input would do absolutely nothing regarding your input lag or input range (it caps out at 250 possible positions). You would still have a stuttering artifact from the mismatch between your display rate and the mouse polling rate (this stuttering gets smaller as you raise the polling rate), but that's it.

So, let's say you're playing at 1000 FPS with a 1000 Hz polling rate. Again, that option would do absolutely nothing.

The only thing this option does is separate the mouse inputs into a completely separate "simulation" of some sort, instead of deferring that mouse data to the next frame. That's why "sub-frame input" is a good term for it. In Reflex Arena, they call it "Responsive Input."

A lot of games don't have a solution like that. That's why, in most games, raising the FPS is such a big deal for input lag. CS:GO, for example, still works the way old Overwatch did.

In Overwatch, the input lag of the mouse AND its precise position are already tied to the polling rate of the mouse. The "output lag," however (what you see on the screen), still works like before.

Because you can't react to what you can't see, this improvement in input lag is debatable. More importantly, though, it makes very fast flicks WAY more precise. It doesn't matter much for sniping, for example, but it's still a very good solution.

Actually, he could land that hit while I was still invisible on his screen. As I said, sub-frame input is a thing. There's a quick video that shows this perfectly: [Overwatch High Precision Mouse Test on PTR - YouTube](https://www.youtube.com/watch?v=4WRuOdVPCKw)

But to be completely honest, I have doubts about how exactly it works. If you see a target in front of you, flick the mouse extremely fast in some direction, and click at exactly the right moment to land the shot, it would hit, EVEN if that information hasn't shown up on your screen yet. Your visual data will still lag behind (because the input is no longer deferred to the next frame).

So, the question is: does the simulation of the input itself run at whatever the polling rate is, while the hitbox only "moves" at your SIM rate?

OR is the hitbox itself being "simulated" at a higher interpolation frequency, to match the polling rate of the mouse?

I tried to test this with some macros but I couldn’t. This is something I’ll need to research or send a ticket to understand.

My guess is that the simulation itself continues to "move" at the same rate as the SIM number, so the mouse input would hit something that is "stuck" in time. That would make this option rather pointless (again) without raising the SIM number.

I'm not sure. I hope the hitboxes themselves are being simulated at a higher frequency, but I don't have an exact answer right now.

I could argue that a less detailed game would actually HELP with aiming. That's why a lot of people don't like visual clutter.

If the characters in this game didn't have any textures and were just red silhouettes all the time (like during Widowmaker's ult), you would hit targets just fine, or even more easily.

Again… how would it degrade the detail if it's just interpolating data between two frames? You can argue that this "extra detail" isn't perfectly accurate, and I agree. But you STILL have the information from those two consecutive frames. It just happens that between those frames you'll have a "blur" that glues them together. You lose nothing.

In theory, infinite frame rate and refresh rate would fill this exact space; the only difference would be the fine details (eyes, nose, clothes, etc.).

You don't lose the actual frame data; you just GAIN MORE information between frames by interpolating and blending them together. How precise this information is depends directly on how many samples it has and how "far" it blends. Slow movements would barely show any difference.

Arguing that interpolated data is inherently "bad and inaccurate" is very simplistic. Isn't DLSS 2.0 incredibly precise?

I would say that trying to get "precise" motion blur while playing at 20 FPS is not even close to good. But at 240 Hz/240 FPS? You already have a lot of sample data at that point.

You're not "losing" information, because there's already ZERO information between two frames. This is the EXACT reason the stroboscopic effect happens to begin with: it's a gap, a void with absolutely nothing.

Since we have sub-frame input, I'm not absolutely sure how hitboxes are treated. We can hit things "faster" than what is being "simulated," so how can we be sure?

But if I can hit something mid-flick (a flick faster than the screen is showing), how would you know whether this interpolation is tied to the visual data and not to the polling rate of the mouse itself?

I’ll try to explain:

Actual tick rate (62.5 Hz):          ----------------[]--------------[]--------------[]--------------[]…
Visual data ("simulated" at 240 Hz): ----[]--[]--[]--[]--[]--[]--[]--[]--[]--[]--[]--[]--[]…
Input data (mouse click at 1000 Hz): ---[][][][][][][][][][][][][][][][][][][][][][]…
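The same diagram in numbers (purely illustrative, using the 62.5/240/1000 Hz figures above):

```python
# How many rendered frames and mouse polls fall between two consecutive
# simulation ticks at the rates shown in the diagram above.
TICK_HZ, RENDER_HZ, POLL_HZ = 62.5, 240, 1000

print(f"one tick lasts {1000 / TICK_HZ:.1f} ms")
print(f"~{RENDER_HZ / TICK_HZ:.1f} rendered frames per tick")
print(f"{POLL_HZ / TICK_HZ:.0f} mouse polls per tick")
# -> 16.0 ms, ~3.8 frames, 16 polls: plenty of visual and input data exists
#    "between" ticks, which is what the whole sub-frame discussion is about.
```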

I guess only Bliz can clarify this.


SIM isn't precise, for sure, but it can highlight the issue where two cards capable of handling the same game at the same settings at 120 and 160 fps produce hugely different SIM times and hardware latencies, because each architecture has its own quirks. It's well known that Nvidia GPUs have higher driver and API overhead compared to AMD GPUs. That reflects how the whole system handles several things at the same time and the overall impact.

Jitter is any kind of variance when performing the same or a similar task several times. To avoid jitter we often lock things to run at the same pattern, but there is always some jitter somewhere, either from hardware clocks ramping up and down or just from how components are programmed to behave once certain conditions are met. My experiment was in Overwatch at 1080p, 120 fps, every setting on low, 100% render scale, textures on High, Reduce Buffering on, and anti-aliasing on Low-FXAA; both GPUs could keep the framerate stable with Anti-Lag, Chill, Reflex, or none of them. The SIM numbers, and the actual frame times, on the R9 380 were lower than on the GTX 1080, both with the FX-8350 and with the Ryzen 7 5800X. The test was performed 5 times per configuration on both machines with the same settings, compiling 5-minute runs in the Practice Range, totalling 50 tests with the FX-8350 and 80 tests on the Ryzen 7 5800X.

At 120 fps

  • Anti-Lag on the R9 380: 8-9.8 ms SIM times on the FX and 8.2-8.4 ms on the Ryzen
  • Chill on the R9 380: 9-10 ms SIM times on the FX and 8.4-8.8 ms on the Ryzen
  • Without either feature, the R9 380: 9-11 ms SIM times on the FX and 8.8-9.2 ms on the Ryzen
  • Without the feature, the GTX 1080: 12.2-14.8 ms SIM times on the FX and 11.4-14 ms on the Ryzen
  • With Nvidia Reflex, the GTX 1080: 10-14 ms SIM times on the FX and 9-12 ms on the Ryzen

At 160 fps

  • Anti-Lag on the R9 380: 8-9 ms SIM times on the FX and 7.4 ms on the Ryzen
  • Chill on the R9 380: 9 ms SIM times on the FX and 7.6 ms on the Ryzen
  • Without either feature, the R9 380: 9-11 ms SIM times on the FX and 7.8 ms on the Ryzen
  • Without the feature, the GTX 1080: 12-14 ms SIM times on the FX and 10-12 ms on the Ryzen
  • With Nvidia Reflex, the GTX 1080: 10-14 ms SIM times on the FX and 8-12 ms on the Ryzen

The FX capped out at 220-280 fps with the GTX 1080. The R9 380 peaked at roughly 180-200 fps without any changes, so the test wouldn't have been fair while keeping the same settings and I stopped using the R9 380 as well; both pieces of hardware were left out of the 300 fps+ tests. I had already reached my goal of hitting the fps target I wanted at the time; I explain that purpose further down.

GTX 1080 at 300 fps

  • Without the feature: 6-8 ms SIM times on the Ryzen
  • With Nvidia Reflex: 2.6-9 ms SIM times on the Ryzen

GTX 1080 at 350 fps

  • Without the feature: 6-7 ms SIM times on the Ryzen
  • With Nvidia Reflex: 2.6-6.8 ms SIM times on the Ryzen

GTX 1080 at 400 fps (unstable)

  • Without the feature: 6-10 ms SIM times on the Ryzen
  • With Nvidia Reflex: 2.6-8.8 ms SIM times on the Ryzen

SIM is an imprecise measurement, but it can translate to real-world performance when you're actually playing the game while other stuff is also happening on your PC. I took care to keep thermals under control and avoid any kind of throttling during the tests.

The test was conducted by playing Hanzo, killing the two bots and a Tracer, flicking between each shot, performing the left-side jump, using Storm Arrows while rotating the camera, performing the right-side jump, and repeating the process for 5 minutes per experiment. That added up to 25 minutes on the FX machine in a single session, with pauses of roughly 5 minutes between experiments, swapping between 120 fps and 160 fps on the R9 380; then I swapped GPU slots and restarted the PC with the GTX 1080 for another 25 minutes. In total that's two training sessions with a fresh PC start between them. Both situations had the same stuff open (the PC has some auto-start software), and all runs were done with a 5-minute interval each. Including the intervals between tries and reseating the GPUs, I spent about 2 hours of testing on the FX, with 5-minute breaks plus 10-minute breaks for shutdown, card swaps, start-up, and waiting 1 minute after logging in.

Then, after a day, I repeated the process on the Ryzen machine, using both GPUs in it.

I did all that because I was planning to pass my old FX gear to my brother, who had a G4560 CPU and a GTX 1050, so I took some time to make the machine work properly the same way it would for me, and at least better for him than his current one. He would use the machine with less stuff running than I did, so the performance would be similar to mine on the Ryzen, and his case would be even better because he runs less software at the same time. SIM metrics and FPS were a nice way to measure the timings properly, so I set the machine up and delivered it to him well optimized for his purposes. It made him really happy, by the way, and performed way better than I expected.

I use the R9 380 and GTX 1080 for work purposes, so those GPUs were what I had on hand to test before delivering the gear to him (without the GPUs). His old PC, without a GPU, went to his fiancée to organize her teaching lessons for kids during the pandemic, and he kept the 1050 with the FX gear, while I upgraded my PC to improve my performance at work.

Technically you're correct, but both technologies are used to get similar results. The Anti-Lag feature was recently highlighted by AMD for exactly that marketing purpose. Both try to solve the same problem in different ways, so it's not "exactly" wrong to consider one an alternative to the other.

Its main marketing is about keeping your card cooler and reducing overall power consumption, but yeah, you're right that it does more than what the marketing suggests, as the tests showed.

That's true; not actually useless, but minimal impact for sure, at least when you use your PC for several things while you play.

Anti-Lag is impactful, at least in the R9 380's SIM stats. Anti-Lag can be a competitor both to Reflex and to the old Nvidia Ultra-Low Latency Mode. The major difference is that Reflex is dynamic and has a "suite" of controls to manage it more precisely, but it can also generate inconsistencies in some scenarios, like I had at 400 fps performing worse than at 350, mostly because the framerate wasn't stable. I tested the Anti-Lag feature because AMD solutions often end up open source and I wasn't sure about the Nvidia settings at the time, so I only tested Chill, Anti-Lag, no change, and Reflex (because I had a driver update and the feature showed up for the Nvidia GPU test). Even today I mostly play on the R9 380 while I do heavy testing on the GTX 1080, because it's more programmable.

GPU driver overhead has some impact on the overall experience, at least in the SIM numbers; I've also noticed several times that "weaker" CPUs worked better paired with AMD GPUs because they didn't have to handle the API overhead of Nvidia GPUs. I'm not sure about the current state of the 2000/3000 series, but at least up to the 1000 series that was a thing. Both the FX and the Ryzen had better SIM and response times with the R9 380 than with the GTX 1080 at the same framerates. I work in game testing and often need to simulate CPU-bound and GPU-bound scenarios, test features and inconsistencies, and several times I've had situations where two GPUs were supposed to perform similarly, but because the API generated enough overhead there were odd timing differences even in low-demand tasks. With overkill hardware for the task you often don't notice, but if you use the gear to play without adapting everything for it (fullscreen, no background processes), the extra time the CPU needs to juggle several things using the GPU at once, multiple monitors, and so on, increases overall latency, and that gets more noticeable when you enable Reduce Buffering. That showed up in the SIM numbers and in several tests I did in the past, since I often multitask.

Heterogeneous computing is also something that only "somewhat" works properly on Windows 10; even Linux doesn't have a proper way to deal with it. Often people don't feel the overhead too much because the integrated GPU in the CPU can compensate by doing part of the job; that started with DX11 on Windows 7 and got broader adoption on Windows 10. Most people don't know it, but having the integrated GPU properly used can increase your framerate and reduce input lag a bit for certain tasks, because it can share some of the workload with the discrete GPU, and that parallelism can improve overall performance, though only in particular combinations like iGPU + Nvidia or APU + AMD GPU. In some scenarios the gains are just negligible, because people often use overkill gear for the task.

That's a nice tip; some folks had issues with mouse/keyboard in another thread in the past. They asked about controller support with aim assist, which most likely won't happen, but another user mentioned a workaround they put together with a Wii Remote or Joy-Con plus a drawing tablet. I've also seen things like eye-tracking tech and so on. Anything that makes the game more inclusive is better, for sure. Having special needs and being unable to experience the game comfortably can be troublesome.

TBH, AMD has nice hardware, but their drivers aren't great. Nvidia often has almost the same hardware with far more optimized instructions and drivers. I hope someday Nvidia does something nice for a change and provides open source tools the way AMD does, while AMD delivers hardware optimized at the level Nvidia does, because users would benefit from both. Maybe with Intel entering the GPU space that could happen; more competition and better prices is a win for all users. Most of Nvidia's overhead issues come from the highly optimized and programmable hardware they have, while AMD has raw power that doesn't always translate into better performance, similar to Hyper-Threading/SMT versus more core count. Nvidia has several layers of cleverness in how their GPUs work, which is cool but also adds more overhead in time-sensitive scenarios if you really pay attention to it.

I think you've misunderstood what high precision mouse input is for. I strongly encourage you to re-read the article I linked if you haven't. If you have, I think you are misinterpreting it.

The purpose of high precision mouse input has quite literally everything to do with tick rate. While the camera movement (for visual rendering) already uses sub-frame sampling of the mouse data, firing your weapon does not. Without the feature, firing the weapon occurs only during the actual simulation tick of the engine, which is a fixed 62.5 Hz. If you are rendering the game at 240 fps, it means that the vector of the camera at the time you fire can be quite different by the time the next simulation step occurs and detects/triggers the command to shoot the weapon. The high precision input feature allows detection of the key press between simulation steps, to the accuracy of your mouse.

A mouse that polls at 250 Hz would still normally be limited to a 62.5 Hz sampling rate by the engine without this feature enabled. It would make zero difference whether you run at 60 fps or 240 fps.

It is possible, but it is far more likely that the replay is simply inaccurate. I have seen worse on replays of very slow moving objects that do not travel across the crosshair position.

We do not have the inner details of their engine, but I would imagine they would interpolate between command frames, as this is required for rendering anyway. It will never be perfectly accurate, of course. It is a video game, after all, and video games are basically gigantic boxes of approximations and fudging of things to ensure they “feel right.”

I think you’re confusing my use of the word detail to be synonymous with fidelity. When I said detail, I am referring to clarity. My intended usage of the word would actually coincide with reducing graphical quality in order to achieve pristine, distinguishable edges.

How would it not? To do so you are literally altering the ground-truth value with something that is a combination of both old and new data. Blur removes detail from objects, which does things like making silhouettes more difficult to distinguish.

No, DLSS does not produce precise data. In fact quite by definition of what it is, it does not. It produces an approximation of what the trained AI believes to be correct, and it is not always right. There are some advantages in certain circumstances where it can produce what might be considered a “sharpened” looking image, but it is not precise, nor accurate to the ground truth image.

That's not to say DLSS doesn't produce good results, don't get me wrong. But I would be extremely cautious about enabling DLSS in a competitive, fast-paced title like Overwatch without some serious data showing that the accuracy of the data is within an acceptable margin of error.

But that gap is what I want. I don't want to see an interpolated image between those frames; I want to see only the most recent image, in its most precise form possible, at maximum clarity.

I'm not 100% sure I understand your question as you've worded it, but this is my answer based on my best attempt at understanding it; feel free to correct me if I've misinterpreted:

The interpolation is all based on command frames, i.e. what the server sees. The rendering thread has no bearing on how shots land, but it is an accurate representation of the interpolated command frames. Aside from the server disagreeing with the client on what happened, a flick that occurs sub-frame relative to the visual rendering would effectively be shot at the location interpolated between those two command frames, based on when the input occurs.
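A minimal sketch of that interpolation, with made-up positions and times (not Blizzard's actual code):

```python
# Illustrative only: a sub-frame click is tested against the target's position
# interpolated between the two surrounding command frames.
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

frame_a_t, frame_b_t = 0.000, 0.016   # two command frames, 16 ms apart
pos_a, pos_b = 1.00, 1.55             # target x-position at each frame (metres)
click_t = 0.006                       # click lands 6 ms after frame A

t = (click_t - frame_a_t) / (frame_b_t - frame_a_t)
print(f"hit test runs against x = {lerp(pos_a, pos_b, t):.3f} m")  # ~1.206 m
```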


Exactly. That's one of my main issues with it. The feature needs to be trained to actually represent the right data. The idea behind it is great, but it's too painful to train the AI to produce accurate results for every single object. They build half of the frame from your lower render resolution and the other half from their stored 16K training data, which often doesn't have enough samples or uses real-world imagery (in most cases) instead of training on the game at several quality presets at higher resolution. When you combine it with RTX reflections you can get real-world-looking images inside the game, while the in-game image has worse quality than the reflection, which can even ignore objects that are in front of it.

In the Battlefield showcase they highlighted the flames on the car and the tank, but in the train window's reflection of a building, the reflection was more "realistic" and ignored the tree in front of it; in fact it even showed a window that the tree was covering. It can be beautiful, but that doesn't mean it represents something real. Both techs suffer from the same caveat.

About high precision mouse input: they adapted the client, increasing CPU usage in the process, to accept the input at the exact moment you shot instead of in 16 ms gaps as before. You see where you actually shot, and that info is sent to the server at your tick rate. If the other player was in that spot by the server's measurement, the shot lands even if you don't see it on your screen; if they weren't, you'd get the visual feedback (if your client showed the player at the spot you shot) but deal no damage (a high-network-latency scenario, where your client tries to send reduced ticks, like 15 ms, to the server until things stabilize). They have some rulings about it, like invulnerability abilities having higher priority than shots, often overriding them if both land on the same tick, even if you see that you landed the shot; that was explained in a conference talk about their architecture. I can try to find the full video; they explain Overwatch's internal systems there. High precision input is just the acceptance of your input at any valid moment within that tick, instead of at limited points of the tick with the feature disabled.

To be clear, the official explanations of the SIM value, High Precision Input, and even Reduce Buffering (although that one can easily be understood as "max pre-rendered frames 1") are VERY confusing. I remember trying to get information about this years ago, and even the Blue Posts weren't sure.

If it works like you're saying (as I understood it), then it's STILL an advantage to push for better SIM numbers; you'd have a higher chance of hitting something moving fast, especially in a linear direction (like Doomfist's M2), since interpolating that type of data is quite precise.

That’s why I linked the replay code. It could mean nothing TBH, BUT I’m pretty sure I blinked backwards from my POV at that moment.

So, more visual interpolation would actually help, wouldn't it? At least the "trail" from the motion blur would better represent the moving interpolated hitbox at the rate of the mouse (even if the trail itself is delayed, the leading edge of the object would not be).

Of course it would never be perfect, unless we got the refresh rate AND frame rate running at exactly the same frequency as the polling rate. So it wouldn't matter whether your visual data is interpolated or not; it would just mean a less precise "fidelity" of the real inputs from all players. But a visual hit on your screen would always, technically, be a hit.

The edges of a target (which make up its silhouette) would still be pristine and distinguishable, and targets moving slowly enough to hit would have almost no visible "blur" trail at all. The idea is just to render a better sense of how the target is moving, especially when it's changing direction.

Although whether it will actually HELP you hit targets or not is STILL speculation. We have no idea without the chance to test it.

I recorded Prey using the idea I want for Overwatch: zero (or almost zero) camera blur, and per-object blur for targets. [Prey - Motion Blur Experiment - YouTube](https://www.youtube.com/watch?v=NHQecwef22A)

This gameplay was recorded at 100 in-game FPS (and captured at 60 FPS). At 240 Hz/FPS it would have more fidelity (because of the higher sample rate).

If anyone here has Prey installed, just add r_MotionBlurCameraMotionScale = 0.0 to game.cfg to get the exact same config.

When a frame changes, the old data will still be visible to your eyes anyway because of persistence of vision. You can still focus on and hit that old data because of it. As I said (and there are studies on this), our eyes hold images for roughly 1/15 to 1/10 of a second.

You would technically be seeing all the frames within that window as a bunch of discrete images. Hence WHY we get the strobing effect.

It doesn't matter HOW briefly your screen holds the image. The only difference is more pristine "discrete images" when you have a shorter sample-and-hold time (CRT, ULMB, higher refresh rates).

Motion blur only tries to interpolate the data between these discrete images so you don't see multiple images and gaps, similar to what you would see with a higher frame rate/frequency.

You can still hit the "wrong" target silhouette regardless. Your eyes will still be blurring everything, because that's how they always work.

Think of it this way: EVEN with a sample-and-hold time of just 1 millisecond (ULMB), how in the hell can you "distinguish" a discrete image fast enough to even perceive it? I'll tell you: without persistence of vision (your eyes' natural "motion blur"), you wouldn't.

The difference is just that without blur, this "blur" appears as discrete images, because targets always move in discrete steps (we don't have infinite framerate data). With motion blur applied, it appears more "connected." The more frame data, the more "precise" this connection is and the more pristine it looks. So it's always good to raise the frame rate.

You're WAY too focused on the technical side of the visual information and are forgetting that we see all this data with HUMAN EYES.

If persistence of vision didn't exist, I wouldn't even be trying to make an argument here, since the strobing effects wouldn't be happening either. The strobing effects happen in our EYES, or in ANY camera with roughly a 1/15 to 1/10 shutter speed.

"Precise enough" would be a better phrase, then? It's not THAT far from the truth, hence why people use it anyway. The same can be said of almost any graphical filter, TBH. Isn't anti-aliasing making games less "precise"?

The question is: would you rather have more "precise fidelity" with a LOT of aliasing, or more "polished" and "less precise" visuals with less distracting aliasing?

A frame with a LOT of tearing is more "precise" because it carries less input lag (more up-to-date frame slices). However, all that tearing can be very distracting. Would you rather pay a small input lag price to get rid of that tearing, or not?

How much can these distractions mess with your ACTUAL reaction time?

It's OK, if it doesn't annoy you then that's 100% subjective. But remember that you'll still be seeing a lot of "old frames" because of your persistence of vision. Would you be flicking to the right "frame"?

I misspelled "frame" as "flick".

The question is: how fast is the frequency of this interpolated data? As fast as the polling rate of the mouse itself?

It isn't. It's just a solution trying to be "good enough," like the bunch of sacrifices we make because of the lack of processing power.

How much of this "fake data" would be wrong enough to be a problem? I'd say none of it. The thing affected the most is very fine detail (like little words on a piece of paper).

For the characters themselves and the actual hitboxes, it wouldn't make much of a difference. At the very least, it wouldn't be worse or less precise than 1080p.

The only thing that WOULD be a problem is the introduction of visual artifacts. Blurred words on a little piece of paper on the ground are one of them. You can "see" that it represents words, but they're not precise enough to be read.

But I could argue that if it shows up that way, it was already NOT readable at your current resolution.

I'd say if it's better than a lower resolution without it, we already have an advantage.

Is a "valid moment" an interpolated one? If it is, what's the frequency of this interpolation?

If the tree in the BF scenario would block a shot on a character behind that particular window, that would be a really annoying feature, TBH. That's the issue with using AI: you need to feed it the data, or you end up with "not realistic" artifacts in the game. If the reflection can provide visual feedback of a player or NPC, it can hurt badly if it doesn't show the objects in between.

Being better than a lower resolution still means having half of the pixels based on the low-res render and the other half based on supposed info about that content from the 16K training data. The issue with that kind of trick is the lack of immersiveness if you don't feed the model enough parameters and data to actually get results similar to the real experience at that graphical preset, instead of a guess.

It's a video about their ECS architecture from a GDC conference; I can try to post it later.

Perhaps. At least it's good to have the options. Let's see how AMD's FSR turns out =).

Thanks! :smiley:

I agree. FSR, if I remember correctly, is a combination of post-processing and anti-aliasing tricks. It's more involved than that, but I'm almost sure it doesn't rely on AI.

Overwatch netcode
https://www.youtube.com/watch?v=W3aieHjyNvw

Overwatch post about high precision input
https://us.forums.blizzard.com/en/overwatch/t/new-feature-high-precision-mouse-input-gameplay-option/422094

I would say at about the 20-minute mark he begins to talk more in depth about the system; then you can compare it with the high precision input post.

In the original trilogy of Star Wars, the AT-AT and AT-ST walkers were filmed with movie cameras with Vaseline smeared over the lenses. Given how oddly they moved with the 70’s and 80’s era animation tech back then, and the resulting blurriness of the entire scene whenever a moving AT-AT or AT-ST was being shown, I’d prefer not to mimic that particular effect by smearing Vaseline all over my monitor.

Thanks, I'll take a look!

Well, that’s INTERESTING!

No, visual interpolation would harm clarity, because you're modifying the truth with something that is not the truth. You are blurring data and removing visual clarity. It may look nice, add to the visual fidelity on higher settings, and smooth out motion, but you are modifying the image away from its true value.

How so? In order to fill the gaps you’re referring to, motion blur needs to draw outside those edges. The act of adding motion blur is going to modify the precision of the data.

You seem like a smart guy who's done a lot of research, but I fear you may be incorrectly attributing a specific solution to an unrelated problem (i.e. thinking this will solve something it won't). There is honestly nothing in this game that moves fast enough that you cannot determine its direction, with maybe a very small exception if you're point blank up against a Tracer blinking past your face.

Your inability to track heroes as they change direction, for example, has nothing to do with the few pixels of space between two frames as they change direction. Filling that gap with motion blur won't be some profound improvement that will suddenly allow you to track enemies you couldn't. The cause is your reaction time. 170 ms is not a bad reaction time, but it's still a lot of time when you're talking about movement in a fast-paced game. In that amount of time, a player can move almost a full meter in game before your brain can visually react at all. And that 170 ms is for things you expect; against an A-D strafing target trying to trick you, your reaction time will easily be higher than that on average. The distance the player travels in that time is going to dwarf the effect of the small few-pixel gap you're referring to.
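A quick back-of-the-envelope check of that "almost a full meter" figure (the 5.5 m/s base movement speed is the commonly cited value for Overwatch, not something stated in this thread):

```python
# Distance a strafing player covers before a ~170 ms visual reaction completes.
reaction_time_s = 0.170
move_speed_m_per_s = 5.5   # assumed base movement speed
print(f"{reaction_time_s * move_speed_m_per_s:.2f} m")
# roughly 0.93-0.94 m, i.e. "almost a full meter"
```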

I'm unsure how adding blur would help this situation, though. A motion-blurred object will simply be a persistent blurred image from the previous frame. It is not going to allow my eyes to focus on a more pristine version of the new data as a result.

…and less precise. You lose detail by doing so. Your brain does not need visual connection between two things to track movement.

Depends on the technique. In some cases, yes, it's just a blur (like FXAA). MSAA, however, involves sub-pixel sampling, and is thus more precise than no anti-aliasing.

The first thing I do in basically any shooter is turn all the graphics settings to the lowest. I play at 1440p, so aliasing is never really an issue. Jagged lines from aliasing usually don't bother me. Depends on the game, though.

Tearing is not more precise, it is actually less precise because it means your output image is a combination of two (or more) rendering frames that mismatch one another. At 240hz, the input lag caused by enabling a technology such as gsync or a frame limiter is typically not noticeable enough to outweigh the benefits of no tearing. That said, at 240hz, tearing isn’t typically that bad even without it. Vsync is an absolute no, though.

The rotation of the camera, the resulting rendered frame, and (with high precision input) the direction you fire will indeed be limited by your mouse's polling rate. At 1000 Hz, that is 1 ms, which is insignificant for all practical purposes. You would need an insanely fast flick to an extremely small target for 1 ms to cause much issue. And it's going to be 16x better than not having high precision input enabled at all, which most people (even pro players) don't really notice all that much.

But as I said, again: you're still not accounting for persistence of vision.

[Persistence Of Vision - YouTube](https://www.youtube.com/watch?v=bcstc1ozczQ)

See the above video and how you perceive multiple balls in different colors. Would you aim for the middle one, the one at the leading edge, or the one falling behind?

Your brain ALREADY holds old information at any single moment, roughly 65-100 ms worth of it. You're way too focused on "clarity" and still not accounting for the fact that clarity itself is already being "hurt" by your own persistence of vision.

The difference is that one way of seeing the effects of persistence of vision leads to the stroboscopic effect (like the video above, and gaming in general), and the other has interpolation in the middle of those gaps to make the motion smoother and more connected, like watching video shot with a proper shutter speed.

The only difference is how your brain interprets the data gathered through persistence of vision. The stroboscopic effect, for ME, is extremely unnatural and distracting. For most 3D games that aren't fast-paced enough, it doesn't get to the point of bothering me too much. The same can be said of good old scrolling games; there's a "maximum" scrolling speed that isn't high enough to bother anyone.

You don't have as much "precision" with these discrete images as you would think. They could be less "distracting" to you, but that's just how your particular brain interprets and focuses this visual information.

Only in the gap between those two frames, a place that is already a void of information. Most of the time we can't even hit a target unless it's moving slowly enough, or unless we flick. And during a flick we're not actually seeing anything at that given moment; we're just reacting and using our muscle memory.

Thanks! I'm curious enough, I'd say. I can say the same for you! This community and forum would be great if we had more discussions like this.

The first argument in this topic was always about the stroboscopic effect, and it does solve that effect to some degree. That's one of the possible solutions. The other would be having a LOT of framerate and display frequency, to the point that no movement on the screen would ever be fast enough to produce visible gaps. You'd need THOUSANDS of frames per second for that.
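The arithmetic behind that "thousands" claim (the 4000 px/s motion speed is an assumed example of a fast pan/flick):

```python
# The gap between successive images of a moving object is its on-screen speed
# divided by the framerate. The stepping only disappears when the gap nears 1 px.
motion_speed_px_per_s = 4000   # assumed fast pan/flick across the screen
for fps in (60, 240, 1000, 4000):
    print(f"{fps:>5} fps -> {motion_speed_px_per_s / fps:5.1f} px between images")
# 60 fps -> 66.7 px, 240 fps -> 16.7 px, 1000 fps -> 4.0 px, 4000 fps -> 1.0 px
```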

The whole argument about whether it is a "possible" competitive advantage is just speculation. Seeing how the stroboscopic effect hurts my eyes, it could be one for ME and not for YOU, the same way that vsync on can be a competitive advantage for you, or higher resolutions, or whatever setting that represents a "trade-off" fixing one problem while creating another.

You get to a point where it's just "good enough." Maybe the stroboscopic effect will stop bothering me at 360 Hz, I don't know yet. I bought a 240 Hz display precisely hoping to solve this effect. 240 Hz is not enough, and, AFAIK, even thousands of frames will probably not be enough.

If at a given moment you're focusing on the crosshair itself, you'll perceive an enemy in the background with a lot of stroboscopic effect. You don't need to believe me; you can try it yourself or read the article I've posted here a bunch of times already: [The Stroboscopic Effect Of Finite Frame Rate Displays | Blur Busters](https://blurbusters.com/the-stroboscopic-effect-of-finite-framerate-displays/)

Maybe you're the type of player who doesn't focus much on the crosshair and instead focuses more on the moving target. In that case you wouldn't perceive much stroboscopic effect on the target itself (but you would in its surroundings).

The thing is, you're way too focused on the technical aspect of this discussion and are forgetting or avoiding the discussion about the psychophysics of vision.

I don't have a lot of problems tracking targets or playing the game. You can see me playing in the replay code above. I'm just saying that these problems CAN, at some moments, be annoying enough to potentially hurt your reaction time. It can be minor, it can be major. We don't have the data to know; it isn't an entirely objective measurement.

At this point, talking about a "possible" competitive advantage is just speculation, because we can't "measure" something like that; it involves the very nature of the psychophysics of vision and the BIG variance in human genetics.

The thing I'm most interested in at this point is minimizing the stroboscopic effect. I (personally) don't believe this option would hurt my performance to any degree, but unless Blizz implements it, I can't test it to say whether it will or not. Maybe, if they add it in the future, I'll update this topic and say.

BUT, I'd rather pay the price of a possibly less "precise" experience if it means getting rid of these artifacts, the same way I'm inclined to sacrifice a little input lag advantage for less tearing, or a less precise gaming "fidelity" for the sake of higher and more consistent framerates. It's all about subjective choices.


It's obvious that the stroboscopic effect doesn't bother the majority of the population.

We can make an analogy with colorblind people. The fact that using some of the colorblind options makes your own experience worse is not a good argument against people who are colorblind.

I can't make you "see" things the way I "see" them. But this is a studied phenomenon. How much it WILL or will NOT matter in a real scenario is just speculation.

Exactly, and that's why the presence (or absence) of the gaps in this movement alone wouldn't matter much as a possible "competitive advantage." In the OP, the only thing I said is that this is distracting and can induce motion sickness, which is true for some and not for others. I never feel motion sickness in games that implement motion blur. Should I argue that motion sickness doesn't happen just because it doesn't happen to me?

For those who do feel motion sickness, turning off a motion blur effect would actually HELP their experience, wouldn't it?

So, can it be considered a "competitive edge" for those who are bothered by motion blur effects to turn the effect off? I'd say yes.

In terms of input lag it is. It's not a HUGE difference, but it does exist and it's measurable: https://blurbusters.com/wp-content/uploads/2017/06/blur-busters-gsync-101-vsync-off-w-fps-limits-60Hz.png.webp

I would say that less input lag is always more precise in a game. However, tearing can be so annoying for some that this very, VERY tiny input lag advantage can't justify it.

We already have several ways to implement vsync without additional lag, or at least without a big input lag hit. You just need RTSS and a partial framerate cap. This was discussed years ago on some forums, and now we have a proper tutorial: [HOWTO: Low-Lag VSYNC ON | Blur Busters](https://blurbusters.com/howto-low-lag-vsync-on/)
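A tiny sketch of the idea from that HOWTO (cap a fraction of a frame per second below the refresh rate with an external limiter like RTSS; the 0.05 fps margin here is just an example value, check the guide for its exact recommendation):

```python
# Low-lag VSYNC ON idea: keep the framerate capped just under the refresh rate
# so the back buffer queue never fills up and adds latency.
def low_lag_vsync_cap(refresh_hz: float, margin_fps: float = 0.05) -> float:
    return refresh_hz - margin_fps

print(low_lag_vsync_cap(60.0))    # 59.95
print(low_lag_vsync_cap(144.0))   # 143.95
```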

This is especially interesting for those who love motion blur reduction solutions like ULMB, because tearing is even more annoying and noticeable when you have a lot of motion clarity.

In terms of input lag, yes, but for flicking situations I don't think it is. [Of Mice and Men: Is Finalmouse's 500hz polling really enough? - YouTube](https://www.youtube.com/watch?v=mcsbDDyfLLU)

Yes, very fast flicks. But more importantly, you would have more "headroom" to hit those flicks.

For tracking situations it already isn't, since sub-frame input only happens on the mouse-click event. But still, it does indeed lead to a competitive edge of some sort. Not a big one, but it does.

I believe this is old data from back in the day when G-Sync required vsync enabled to work (in fact it was even enforced in the driver, if I recall correctly). Though I admit I'm a bit out of the loop on this one in its entirety.

G-Sync today, though, does not require (or at least, shouldn't be using) vsync and is most definitely not over double the input lag as the chart on that page suggests :slight_smile:

It does have a very minor hit, but it should be fairly negligible, and only applies when you go under the refresh rate of your display. If you go over, gsync shouldn’t be active and you actually have tearing.

Perhaps you are just thinking about it differently, but to me, my usage of precision in the past has been with respect to the graphical rendering or the image. Input lag does not influence the rendered pixels. It is a side effect of the settings you’ve mentioned, sure, but personally I would not say it has anything to do with the precision of the image itself.

For sure. More is always better. :slight_smile:

Oh quite the contrary, actually. In fact prior to the high precision input feature, your camera movement was the only thing that did use sub-frame measurements, because they always resample and flush the input queue just prior to rendering the frame. Tracking is, in fact, the best scenario.


Also, I must apologize. While this discussion has been very interesting, it has also consumed a lot of my time. Your responses are quite long and also take a long time to respond to (which is also why I typically do not respond to everything you say). I try to pick out the most interesting parts, otherwise I think we'd both end up writing some pretty thick novels :). Don't misinterpret that, though: your responses are long because they are also thorough, which I greatly appreciate. It's rare that someone actually cites sources for a lot of their claims, and I have a heavy respect for that. You probably know far more than I do on the subject, so while I still disagree on several of your points, I admit I don't have any real evidence that backs mine up. For now I suppose we'll have to agree to disagree as to whether or not there could be any real competitive advantage. And maybe you're right, and it could help for you but not for me, who knows…

Have a good one!