A damage nerf would work, except they've got one-shot-kill heroes; you can't nerf their ability to get kills without wrecking their viability. I mean, that's probably why one-shot heroes have typically been the most hated, since they disregard the concepts of tanking and healing. But that was the devs' choice, and it's basically why a global damage nerf would need a huge amount of reworking.
He is not bad at everything. A bunch of stuff was listed that he's good at. You can't say "he has a strong ult, a mobile barrier, high damage, solid CC, and a great absorb mechanic for pushing in" and then, because they made the barrier weaker, say "he is now bad at everything." That makes no rational sense.
Do I lament that I can no longer creatively use the shield to negate certain abilities? Sure. But I also accept that it was a problem area, and there is a difference between a strength and a problem (see also Orisa's old Halt being a problem for future hero design). When something is a problem, it is going to need to be nerfed.
They are exceptionally inaccurate in GM; GM is the tier where they would be most inaccurate. I still remember the blue post when Soldier and Genji were 12th and 13th on Overbuff for win rates, and the response was essentially, "actually they are 6th and 7th and doing quite well in terms of win rate for that time period, and I have the actual stats in front of me."
Because you are dealing with a MUCH smaller pool of players (usually you know who they are, or they are someone's alt, which is often hidden), you are much, much more vulnerable to being skewed by those who do not report, which is why there is an actual science to getting a random sample.
For example, if I took a random sample of players from these forums, that would not be a representative sample of players from the game. Similarly, a sample of people who have their profiles public, while closer to a sample from the game, is not entirely accurate either.
Sorry it's not perfect, but I believe it's more accurate than you give it credit for, unless you can prove otherwise with more than an anecdote. You need less than 5% of a population to draw a good statistical sample.
Blizzard outright stating the win rankings of certain heroes completely shatters the notion that Overbuff is more than anecdotal; Blizzard's numbers are the concrete ones.
Also, as a basic point of statistics, 5% of a population can very easily not be a good sample if it's not representative of the population as a whole. Furthermore, even IF you could manage a valid sample base (again, Overbuff very much does not, and if you want I can break down why), you still have a margin of error of about ±4% for a medium-sized sample and ±5% for a smaller one (see the lower-picked heroes on Overbuff in GM).
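To put numbers on those figures: they fall straight out of the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n). A minimal sketch, assuming simple random sampling at 95% confidence (the sample sizes below are illustrative, not Overbuff's actual counts):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion estimated from n random samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5); sample sizes chosen to land on the figures above:
print(f"n = 600: +/-{margin_of_error(600):.1%}")  # ~ +/-4.0%
print(f"n = 385: +/-{margin_of_error(385):.1%}")  # ~ +/-5.0%
```

Note this formula only holds if the sample is actually random, which is the whole dispute here.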
This, among MANY other reasons, is why Overbuff win rates are generally considered trash by any decent player when it comes to understanding the meta. You can look at pick rates, because even with a margin of error you get a pretty solid idea, and more to the point they better show WHAT is happening, and any decent player could tell you why it's happening.
Naturally. Two college classes in statistics, however, say that in this case it is a decent enough sample relative to the population it covers. You just don't need that much, and there's no reason to believe it isn't a decent enough random sample for each tier. It's not perfect, but it's not utterly dismissible either. BTW, the margin of error is relative to the sample size, not the population.
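To illustrate that last point, here is a rough sketch using the finite population correction: once the sampling fraction is small, the margin of error is governed almost entirely by the sample size n, not the population size N (all numbers here are illustrative):

```python
import math

def moe_fpc(n: int, N: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error with the finite population correction applied."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# The same sample of 1,000 against wildly different population sizes:
for N in (10_000, 1_000_000, 50_000_000):
    print(f"N = {N:>10,}: +/-{moe_fpc(1_000, N):.2%}")
# Output barely changes (~2.94% to ~3.10%): the error is governed by n,
# not by what fraction of the population you sampled.
```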
Oh, but please, break down why it's not a valid sample base; I'd love to hear it. Again, it can't be equal to Blizzard's, because they have the exact statistics, something no outside system can have. OTOH, it's not useless by a long shot. Unless you can prove to me that more than, say, 80% of the players at any tier have private profiles, it's good enough for general analysis.
Don’t care about the meta, nor do I care about win rates in particular.
Yeah…as I was typing I kept hearing a voice in the back of my head saying, “Well this doesn’t really take into account the snipers…” but I was stubborn and didn’t listen.
In a perfect world, I think I'd do away with one-shot kills in Overwatch. I recognize this is a pipe dream, but I'm pretty sick of being instantly deleted by a random Hanzo arrow or a Widow grapple shot (as satisfying as those are to hit myself). Or even Doomfist, but that's another conversation for another time.
Much of OW gameplay is about positioning, and I feel that it’s becoming so difficult to know if you’re in a good or bad position before it’s too late. Team fights are getting more and more chaotic - it would be nice to have the opportunity to react to things rather than just die to them.
Fun fact: I can probably name those two courses and what they taught you, and it is unlikely to be anything useful for actually judging the validity of a model.
I could give a long explanation based on how statistical models are built and the nuance of why such a model is not valid, but I will instead take a simple approach and first present you with an inarguable fact: if you are only modelling people who choose to make their profiles public, you are only capturing people who willingly chose to supply that information. What you would have us do is assume that is a fair sample (and that the model showing them at 13th out of 15 when reality showed them at 7th out of 15 was not worth noting).
However, the problem with that assessment is that it makes the same mistake online polls do, even ones responded to by tens of thousands of people (though I can name a couple that know how to parse their data to avoid what I am about to spell out; Overbuff DOESN'T): it assumes a constant distribution between those who report and those who do not. Any basic 400-level course on population modelling will tell you that this does not work.
Now, if you are having a normal reaction, you are going, "okay, well why doesn't this work?" To answer that, we must dig into what types of people would not have their profiles set to public in GM (I am going to use our topic as an example, but similar problems arise when gathering from a larger population, which is why multiple questions are often asked in order to build a reliable model). First, you have the introvert, or the person who simply wants to keep their information private (nothing special here). If this were the only thing to worry about, the sample MIGHT be semi-reliable, though you could reasonably assume certain heroes are more likely to be played by that personality type; this would not influence win rates, and it is likely not a large swath of the GM player base (but again, we don't have actual numbers for it, which is going to be a problem with a lot of the data you try to gather).
Moving on to the next, and more interesting, group: you have the rather significant set of players keeping their profiles private to avoid harassment. The most common kind of harassment in GM is being harassed for not doing well with an off-meta pick, or for seemingly underperforming with a meta pick. For the former, I can tell you right now that these players are either learning a meta hero (but because they are learning, they lose more, in theory pushing the actual number down from where it displays) or they are semi-successful off-meta GM players (of which there are several in GM, and most I have found do not leave their profiles open, both to avoid harassment and to avoid having a dejected team). The latter, of course, pushes the actual win rate up from where it displays. Combine both of these and suddenly you can account for a difference that ruins the model.
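To make that mechanism concrete, here is a toy simulation of the skew. Every number in it is invented for illustration (none of this is real Overwatch data), but it shows how a public-profiles-only sample stays biased no matter how large it gets when the decision to go public correlates with the win rate being measured:

```python
import random

random.seed(0)

# Toy illustration of the bias described above: all group sizes, win rates,
# and public-profile probabilities are invented, not real Overwatch data.
players = []
for _ in range(5_000):   # established meta mains: 52% WR, mostly public
    players.append((0.52, random.random() < 0.85))
for _ in range(4_000):   # players learning a meta hero: 44% WR, mostly private
    players.append((0.44, random.random() < 0.15))
for _ in range(1_000):   # off-meta specialists: 56% WR, almost all private
    players.append((0.56, random.random() < 0.05))

def mean_win_rate(pop):
    return sum(wr for wr, _ in pop) / len(pop)

public = [p for p in players if p[1]]
print(f"true win rate:        {mean_win_rate(players):.1%}")
print(f"public-profile rate:  {mean_win_rate(public):.1%}  (n = {len(public):,})")
# Typically prints roughly 49% true vs 51% observed: a systematic ~2-point
# skew that no amount of extra sample size will fix, because non-response
# is correlated with the very quantity being estimated.
```

The point of the sketch is that the sample here is in the thousands, well past any margin-of-error threshold, and the estimate is still wrong, because the error is bias, not noise.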
These are just single examples, but one of the things I do is work on various learning algorithms, and one of the first questions you have to ask when your algorithm is not drawing the correct conclusions from the data you fed it is, obviously, "Why?" That seems like an obvious question, and obviously margin of error is in play, but how large would that margin have to be (knowing it is rather unlikely to have not one but two separate heroes falling at the upper end of it) to properly account for the discrepancy? Sometimes you have to go deeper and ask whether the assumptions themselves are faulty (or, more commonly, whether you are simply not accounting for enough things).
Overbuff accounts for literally nothing; it is just raw, unfiltered data, which has minimal value. As such, its data cannot possibly maintain an air of credibility, and the one time it was checked against Blizzard's numbers, the actual figures were so far off that the margin of error you would have to assume to account for them would break the model itself.
Which, in any average match, is at least 2-3 of the 6 people on a team, and more the higher you go. Waaay more than enough.
I am not saying it's perfect, but it's a far, far larger sample than is needed to draw fairly reasonable results.
Please link me to this information for comparison. Now, where I will agree is that some of their "rankings" use weird formulas. I'll grant it can't hold up to a refined statistical model; that in turn doesn't mean it's utterly useless either. Sure, it will deviate from Blizzard's numbers; Blizzard has all the data. OTOH, Overbuff has enough to draw fairly reasonable conclusions, and it often correlates with what people expect for pick rates, win rates, etc., after nerfs and buffs. It's just plain more reliable than the standard you have adopted. I can see why, to a "statistician of your caliber," it's "useless" since it wouldn't pass muster in a high-level college class, but it has still proven accurate enough over time for general use.
Lol, you predicted the future! Though to be fair, it's not hard to predict the future given the devs' balance history, especially with tanks. This nerf was so stupid.