Say f is a function that takes as input the length of the right stick in inches and yields the time in seconds it takes your thumb to bring the stick from rest (0 degrees of deflection) to full deflection (90 degrees). Any veteran can (willingly) verify that full deflection yields the fastest camera turn speed. Furthermore, said veteran can (slightly less willingly) verify that, with a sufficient degree of thumb dexterity, you should be maintaining maximum camera turn speed for the full duration of the reticle’s travel to the target, and upon reaching the hit box window you should be bringing the stick back to rest (0 deflection), resulting in a hit marker on a simultaneous trigger pull. If done correctly, the camera’s motion will feel akin to a “flick.” Perhaps of da wrist? Not really, because it’s your thumb, but maybe on a mouse?
Imagine for a moment what the resulting curve might look like after plotting f()’s output on a Cartesian plane. With distance in inches along the x axis (window [0, 1]) and time in seconds along the y axis (window [0, 2]), the curve looks similar to the image that high school precalculus teachers try to associate in their students’ minds with y=x^n. If you’re not a nerd like me, what this means is that on a mouse, x = 0 and y = 0. On a console controller with no stick extender, x = 0.2 and y = 0.05. On a console controller with the longest reasonable extension, x = 0.4 and y = 0.15. I can supply the actual graph if any of you helpful, constructive members of the Overwatch community would like to critique my figures (or my claims in general), but they would be difficult to meaningfully argue, because no matter how you clock it, the range of x values that stick extenders achieve corresponds to a range of y values that sit along the part of f()’s curve where its growth visibly starts becoming unwieldy.
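For anyone who wants to poke at the shape claim without waiting for my graph, here is a minimal sketch (in Python, purely for illustration) that back-solves a power-law fit y = a·x^n from the two non-zero data points above. The exponent and coefficient are assumptions derived from my own rough figures, not measured values.

```python
import math

# A minimal sketch, assuming the three (stick length, startup time) pairs quoted
# above and a power-law shape y = a * x**n (the y = x^n picture the curve is
# compared to). The exponent and coefficient are back-solved from my rough
# figures, not measured.
points = [(0.0, 0.00),   # mouse: no stick, no startup time
          (0.2, 0.05),   # stock console stick
          (0.4, 0.15)]   # longest reasonable stick extender

(x1, y1), (x2, y2) = points[1], points[2]
n = math.log(y2 / y1) / math.log(x2 / x1)   # exponent: log(3)/log(2) ~= 1.58
a = y1 / x1 ** n                            # scale coefficient

def f(length_in_inches):
    """Estimated seconds to bring the stick from rest to full deflection."""
    return a * length_in_inches ** n

for x in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{x:.1f} in -> {f(x):.3f} s")
```

Play with the two non-zero points however you like; the curve only gets steeper as the stick gets longer.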
You may say that the difference between a startup time of 0 seconds, 0.05 seconds, and 0.15 seconds is not noticeable if you are a seasoned player who can predict enemy positioning. You may say that a player’s technical aim proficiency plays a far more significant role in overall accuracy than this smattering of fractions of a second. You may say stick extenders give you more precision regardless.
I say you’re not wrong. But level with me for a moment about what high-octane, top-tier Overwatch gameplay looks like. Top teams’ composition and execution make it abundantly apparent that once they take control of any given CQB room, they fully expect their McCrees, Widows, Tracers, … to hold down that room regardless of their reticle/camera’s orientation to whatever plethora of entryways the room may offer at any given time. Consider for a moment the worst case scenario where your target is 180 degrees away from your reticle. Roughly clocked, and on the steepest aim curve that console settings allow you to achieve, you can bring the reticle inside the enemy’s hit box no sooner than 0.21 seconds after the moment you become aware of the enemy’s location. Add another 0.05 seconds because you’re on console and using a controller. Unless of course you have a disability and need to use a mouse and keyboard despite needing to be on console to play with your friends.
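To put the worst case in plain arithmetic: the 0.21-second figure implies a maximum turn rate of roughly 180 / 0.21 ≈ 857 degrees per second, and the stick startup time from earlier simply gets added on top. A quick sketch (same illustrative Python, same assumed numbers) of what that does to each setup:

```python
# A minimal sketch of the worst-case arithmetic above. The turn time is the
# post's own 0.21 s figure (~857 deg/s), not a value pulled from the game's
# settings, and the startup times are the same assumed values as before.
TURN_180_SECONDS = 0.21          # steepest console aim curve, 180-degree turn

startup_times = {
    "mouse":            0.00,    # x = 0 from the earlier figures
    "stock stick":      0.05,
    "longest extender": 0.15,
}

for setup, startup in startup_times.items():
    total = startup + TURN_180_SECONDS
    print(f"{setup:>16}: {total:.2f} s to land a 180-degree flick")
```

Whether 0.26 versus 0.30 seconds matters to you is exactly the argument above; the point is that the gap never goes away, and it only grows with the stick.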
This is the reason for the nasty rumor that 4000 SR on console is 2500 on PC. End-game play looks so different between console and PC because these commendable gamers are working with vastly different windows of opportunity, a concept that any veteran Overwatch player can (willingly) verify contributes greatly to the tactics that define end-game play, such as (but not limited to) playing corners, holding tight long-range lines of sight, and Neji-level awareness.