Sort of.
Having a poll check the population distribution of something is different from checking whether a variable had an influential effect. The breakdown of what the stats indicate is generally tied to a call to action based on those stats; as such, some of the thresholds for interest can vary enough to make it harder to distinguish whether something is an outlier, influenced by noise (or placebo), or influenced by skew (because of the sampling).
Since both sites sample from the same source, the differences currently evident do indicate potential trending differences in the sampling, especially when filters are applied.
Similarly, stats like these would be better viewed as over-time samples rather than as snapshots of the last update; that way they map trends of change instead of just posting the current numbers. Topical responses tend to look at the as-is number, not the change %, unless it’s really big (and the big swings tend to come from smaller sample sizes).
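As a rough illustration of the snapshot-vs-trend point, here's a minimal sketch; the patch labels and counts are invented, not pulled from either site, and a real version would read a per-patch export instead of a hardcoded list:

```python
# Sketch: report per-patch change instead of only the latest snapshot.
# All numbers here are made up for illustration.

# (patch, wins, games) per patch window: hypothetical values
patches = [
    ("2.49.0", 540, 1000),
    ("2.49.1", 610, 1050),
    ("2.49.2", 590, 980),
    ("2.49.3", 630, 1010),
]

prev = None
for patch, wins, games in patches:
    wr = wins / games
    delta = (wr - prev) * 100 if prev is not None else 0.0
    print(f"{patch}: {wr:.1%} ({delta:+.1f} pts vs previous window)")
    prev = wr
```

A reader scanning the change column can catch a trend reversing even while the as-is number still looks alarming.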
Part of the issue with stat-reading is that these are exactly the particulars people overlook, because the numbers look “good enough” to demand their call to action. When the call isn’t met (sometimes because of the deviation between source and sample), they get upset: they thought the numbers posted were “good enough,” and so they distrust anything that doesn’t agree with them.
A front-page look at HotsLogs, though a small sample, puts DW at a 63% w/r;
for HP, he’s at 58.93%, and filtering out the lower levels does bring it up to 60%. But the overall data point there exceeds 100%, which is a direct example of potential skew; widening the date range (two more minor patches) brings the skew up to 113% on Deathwing. That raises a few questions (a sketch of the kind of check involved follows the list):
- Is the winrate influenced because the sampling exceeded 100%?
- Is duplicate information a factor?
- Is the winrate not influenced, but exceeding 100% just a display error?
- If the over-time trend has persisted downward, is the concern about DW just one of exposure (high ban rate), and he isn’t as egregious as the initial claims posted at 60%+ rates suggested, because people have finally adjusted?
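To make the duplicate-data question concrete, here's a minimal sketch of the kind of sanity check a 100%+ reading invites. The match_id field and record layout are my own stand-ins; neither site documents its internals, so this is an assumed schema, not theirs:

```python
# Hypothetical replay records; the same match uploaded twice inflates
# both the games total and (when it's a win) the winrate.
records = [
    {"match_id": 101, "hero": "Deathwing", "win": True},
    {"match_id": 101, "hero": "Deathwing", "win": True},  # duplicate upload
    {"match_id": 102, "hero": "Deathwing", "win": False},
]

total_matches_sampled = 2  # what the site claims to hold for this window

raw_coverage = len(records) / total_matches_sampled
print(f"raw coverage: {raw_coverage:.0%}")  # 150%: the '>100%' smell

# Dedupe by match id before computing anything else.
seen, deduped = set(), []
for r in records:
    if r["match_id"] not in seen:
        seen.add(r["match_id"])
        deduped.append(r)

wins = sum(r["win"] for r in deduped)
print(f"after dedup: games={len(deduped)}, winrate={wins / len(deduped):.1%}")
```

If a site skips a step like this, the same skew would show up in both the coverage figure and the winrate, which is consistent with the questions above.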
Obviously there’s the fact that changes were already confirmed to be coming; however, if people look at the high spikes, they may be expecting a far bigger change than will actually land if the w/r only peaked that high because of the skew. If the change is below their expectations, then you may see reactions fall into camps of “oh, blizz is just showing how p2w he is,” “they have no clue what they’re doing,” “they’re always bad at balancing,” and so on.
Deviations have influenced deterministic deduction: how people deal with deviation isn’t as direct as they demand.
The other point of concern is that the ‘best’ mapping these sites do is usually when there aren’t any filters applied, or only filters that can be accurately mapped. When people start filtering for Diamond+, these systems have to guess where the ranks fall and distribute that over the population they’re mapping. As an example, HP listed my Storm League seasons as Gold, but I finished in Diamond. It has to estimate the starting MMR and the influence of season changes, and it can only estimate in proximity to those I matched with; my lower estimated rating may pull others matched with me downward and thus exclude their data from the filtered trends, despite them actually being within range of the sample. So how does that influence talent percentages compared to the source?
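Here's a minimal sketch of that exclusion effect under stated assumptions: the cutoff value, the 'pull' factor, and the idea that the site nudges estimates toward the people you matched with are all my guesses at an approach, not HP's documented method:

```python
# Hypothetical MMR floor for a "Diamond+" filter.
DIAMOND_CUTOFF = 2800

# True rating vs what the site estimated after a bad seed for "me".
players = {
    "me":        {"true": 2850, "estimated": 2500},  # finished Diamond, listed ~Gold
    "teammate1": {"true": 2900, "estimated": 2900},
    "teammate2": {"true": 2820, "estimated": 2820},
}

# Model the drag from matching with an underestimated player as a
# fraction of their estimation error (the 1/4 factor is arbitrary,
# purely for illustration).
pull = (players["me"]["estimated"] - players["me"]["true"]) / 4
for name in ("teammate1", "teammate2"):
    players[name]["estimated"] += pull

for name, p in players.items():
    kept = p["estimated"] >= DIAMOND_CUTOFF
    print(f"{name}: true={p['true']}, est={p['estimated']:.0f}, "
          f"kept in Diamond+ filter: {kept}")
```

In this toy run, teammate2 is genuinely above the cutoff (true 2820), but the pulled estimate (about 2733) drops their games from the filtered sample, so every stat computed over the ‘Diamond+’ slice quietly loses in-range data.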
So between sampling deviations, filtered skews, and estimation guesswork, it’s harder to make direct calls to action based on specific values. One range of dates can see a hero exceed 56%, and another can see them 5% lower; some players consider a few of the constraints Blizzard uses for balance, but otherwise neglect the other ones.
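A quick way to see why that spread isn't surprising: a minimal sketch (sample sizes invented) of the 95% confidence band around the same observed 56% winrate at different sample counts:

```python
import math

def wilson_interval(wins: int, games: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = wins / games
    denom = 1 + z**2 / games
    centre = (p + z**2 / (2 * games)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / games + z**2 / (4 * games**2))
    return centre - half, centre + half

for games in (200, 2000, 20000):
    lo, hi = wilson_interval(round(0.56 * games), games)
    print(f"n={games:>6}: observed 56%, plausible range {lo:.1%} .. {hi:.1%}")
```

At a few hundred games (front-page sample territory), the band alone spans well over five points, so two date ranges disagreeing by 5% doesn't even require skew to explain.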
And even then, a lot of the calls that get suggested could be made without needing to rely on the stats at all: if something ‘seems’ too strong, then suggestions to bring it downward still help; similarly, posting % responses doesn’t help the people who are having the issue, whereas direct examples actually show it.
Generally, the biggest indicator may just be the population distribution: high popularity indicates a stagnant meta, so things will likely change to deter the popular picks, even if the stats don’t agree (as can be seen in some dev notes where low-success heroes continue to be highly popular). Some stuff people call UP, others consider OP, and then they argue over which to disregard because they argue over the deviations.
The last little aspect here is that some people content themselves to reinforce the stats presented; sometimes it’s an excuse to give up or not try as hard, and other times it’s just a source of frustration that stems directly from the number shown rather than the other factors involved. Because some metric of balance intent was conveyed, some of the players (even the ones that don’t post here) take that to shape their understanding of the game.
So some of the deviations influence expectations, and that directly influences how people conduct themselves.