Warlock shifts the meta in an unhealthy way

If you follow your own link, open the deck, and filter the decklist’s results by T1KL Europe, the deck averages a 49.9% winrate. That’s tracker-side, so IRL the deck sits at roughly a 48.5% winrate.

I get that you’re pointing to something that actually happened, but this is an example of data that isn’t representative. It’s not even representative of the skill of the player being tracked. If we rewound time, put Norwis_ back at the rank he was at before the run started, and had him retry from there ten times, he would fail to do as well in at least seven of those ten attempts.

This is cherry-picking, which doesn’t mean lying; it means misrepresenting the normal by presenting the truth of the exceptional in its place.

:grinning: I had a feeling you’d say something like this. Well, to that, I’ve prepared an answer: you could do us a favour and post a direct link yourself. I just don’t use X and the like, and that particular channel happens to be in my bookmarks, as I use it as a reference and information source for myself. Couldn’t be bothered to dig up the original source. :grinning:

That’s the whole point of it.

Objective data is objective data, whosoever posted it (even if someone like you :grinning: ).

One could guess, but there’s no point trying to peek inside someone’s soul, especially online.

I’m really surprised that any of you could make anything out of the info that’s displayed on the website (maybe there are some obscure tabs that I missed, dunno): it’s absolutely unclear at which ranks those games were played; for example, in a bot-infested D5-D1, scoring something like 20-0 isn’t outstanding at all.

Besides, since you’ve made me use my head here for once, let’s take a look at those numbers and partake in some maths on the level of bumpkins like Maw and Paw (since I’m not that well versed in the intricate differences between hayseeds, hicks, hillbillies and rednecks, you’ll have to excuse me if I miss a proper title; besides, Maw and Paw look kinda… zombie-green, so, I guess, the last-named option would be ‘greennecks’? :thinking: ).

Let’s assume a fixed probability of winning a single game, p = 0.5 (a crude estimate, but it’ll do). The probability of winning k games out of n is given by the binomial distribution; as is well known, its mean is np and its dispersion (variance) is np(1-p), which for p = 0.5 is just np^2, so the standard deviation (SD) is sqrt(n)*p. I can’t easily tell you the factor to multiply the SD by to estimate the stochastic error for a given confidence interval of the binomial distribution (see also this; apparently there are no maths gurus here to help me out), but suffice it to say that the relative stochastic error (SD/mean) is proportional to the inverse square root of n. Thus, for on the order of 100 games, it’s on the order of 10%, so the most expected observed win rates would fall within 45%-55% for a ‘one sigma’ confidence interval. That isn’t even that great: one sigma covers only about 68% for the normal distribution (which isn’t exactly the case here, according to Wikipedia), and you’d want ‘three sigma’ to be rather sure, although even that isn’t always quite enough. For 10 000 games the error is on the order of 1%, and so on.
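For anyone who wants to sanity-check those back-of-the-envelope figures, here’s a minimal sketch (Python with scipy, purely my choice of tools, nothing to do with the tracker site) that computes the win-rate SD and the exact binomial probability of landing within one SD of the mean:

```python
from math import ceil, floor, sqrt
from scipy.stats import binom

def one_sigma_coverage(n, p=0.5):
    """SD of the observed win rate, plus the exact binomial probability of
    the win count landing within one SD of its mean."""
    mean = n * p                      # np
    sd = sqrt(n * p * (1 - p))        # sqrt(np(1-p)); equals 0.5*sqrt(n) for p = 0.5
    # P(ceil(mean - sd) <= wins <= floor(mean + sd))
    coverage = binom.cdf(floor(mean + sd), n, p) - binom.cdf(ceil(mean - sd) - 1, n, p)
    return sd / n, coverage

for n in (100, 1_000, 10_000):
    rel_sd, cov = one_sigma_coverage(n)
    print(f"n = {n:>6}: win-rate SD ≈ {rel_sd:.1%}, mass within one SD ≈ {cov:.1%}")
```

For n = 100 this gives a win-rate SD of 5% (hence the 45%-55% band above), and you can compare the printed one-SD coverage against the roughly 68% figure for a normal distribution.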

With such a small sample size n, according to these estimates, it’s not at all improbable to see figures like that, although I’d like to see the confidence intervals for different values of those coefficients.
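Since the question of confidence intervals came up, here’s a rough sketch of what they look like, using the Wilson score interval (my choice of method; the 55% observed win rate and the sample sizes are purely illustrative numbers, not anything taken from the tracker):

```python
import math

def wilson_interval(wins, games, z=1.96):
    """Wilson score interval for the true win rate (z = 1.96 gives ~95%)."""
    p_hat = wins / games
    denom = 1 + z**2 / games
    centre = (p_hat + z**2 / (2 * games)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / games + z**2 / (4 * games**2)) / denom
    return centre - half, centre + half

# e.g. a 55% observed win rate over 100, 1 000 and 10 000 games
for games in (100, 1_000, 10_000):
    wins = round(0.55 * games)
    lo, hi = wilson_interval(wins, games)
    print(f"{wins}/{games}: true winrate plausibly in [{lo:.1%}, {hi:.1%}]")
```

The point it illustrates is the same inverse-square-root behaviour as above: at 100 games the interval is wide enough that a 55% observed win rate is still consistent with a 50% deck, while at 10 000 games it no longer is.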

I’d really like to ask you how exactly you estimated that, be it nitpicking or not, but is there any point…