And you’re consistently failing to do so.
Taking my experience into account is not a bias if that experience is interpreted correctly/reasonably.
However, I won’t even try to dwell on this any longer. Maybe another day.
But what is really, really pissing me off is the fact that you, who pride yourself on being “objective” to the bone, fail to take into account an inherent behavior of numbers when interpreting statistics whose sample sizes differ widely.
If a deck is played 70,000 times, a score of 35,500 : 34,500 makes it a 50.71% deck.
If a deck is played 700,000 times, a score of 350,500 : 349,500 makes it a 50.07% deck, which is 0.64% less, even though the difference between the number of wins and the number of losses is the same in both cases (1,000).
Now, here, deck 2 is played 10 times more than deck 1. In the example of Rainbow Shaman from the last report, when it sat at a 1% playrate, it was compared to decks with over 15% playrate, which means the difference in winrates is even higher than 0.64% just because of the order of magnitude of games played.
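You can verify the arithmetic yourself in a few lines (numbers taken straight from the example above):

```python
# Two decks with the same win/loss gap (1,000 games) but sample sizes
# an order of magnitude apart.
def winrate(wins: int, losses: int) -> float:
    """Winrate as a percentage of total games played."""
    return 100 * wins / (wins + losses)

deck_1 = winrate(35_500, 34_500)    # 70,000 games total
deck_2 = winrate(350_500, 349_500)  # 700,000 games total

print(f"deck 1: {deck_1:.2f}%")                      # 50.71%
print(f"deck 2: {deck_2:.2f}%")                      # 50.07%
print(f"gap: {deck_1 - deck_2:.2f} percentage points")  # 0.64
```

Same absolute lead of 1,000 wins, different percentage, purely because of the denominator.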
How can you compare apples and oranges and say the results of your comparison are objective data? You can’t. The only thing objective about that is how wrong it is.
I don’t need personal experience to take this into account because I’m educated enough to know this. But if I didn’t know this, I would at least have my own personal experience to rely on to know that the report is objectively wrong.
You, apparently, lack both.
P.S. And that inherent property of numbers is the reason popularity * winrate is a more objective measure of a deck: the multiplication cancels out the division, and congrats, you’re finally ready to compare the data on a similar scale, for a change.
Another, more tedious method would be to put them all on a logarithmic scale. Either way, these data NEED to be normalized before comparisons are made. The differences in winrates between the decks are way too low for you to ignore a 0.5–1% deviation.
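The P.S. point about the multiplication cancelling the division can be shown algebraically: playrate * winrate = (deck games / total games) * (deck wins / deck games) = deck wins / total games, so the per-deck sample size drops out entirely. A minimal sketch, with made-up deck numbers purely for illustration:

```python
import math

def meta_share_of_wins(deck_games: int, deck_wins: int, total_meta_games: int) -> float:
    """playrate * winrate reduces to deck wins / total meta games."""
    playrate = deck_games / total_meta_games
    winrate = deck_wins / deck_games
    return playrate * winrate

# Hypothetical numbers: a niche deck vs a popular one in a 1,000,000-game meta.
total = 1_000_000
niche = meta_share_of_wins(10_000, 5_200, total)      # 1% playrate, 52% winrate
popular = meta_share_of_wins(150_000, 76_500, total)  # 15% playrate, 51% winrate

# The per-deck division cancels: the product is just wins / total games,
# so both decks end up on the same scale.
assert math.isclose(niche, 5_200 / total)
assert math.isclose(popular, 76_500 / total)
```

With both measures expressed as a share of the same total, the order-of-magnitude gap in games played no longer distorts the comparison.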