The effect of skill, quantified. DR278

That doesn’t make the stat in the op correct. That stat would look like it does specifically because of how it is calculated. It was going to give that answer no matter what because of how the parts are combined.

What VS never does is calculate bogus stats to represent it… they talk about trends in their descriptive data. It’s perfectly acceptable to use their data in the way they do.

I mean, definitionally, that’s exactly the meta becoming less favorable to it, though.

If by piloting differences you mean you stop getting bad opponents, that’s something that the data supports, but nothing directly measures piloting in this data specifically because of how it is collected and reported.

But not because of deck selection.

It’s piloting differences largely driving it. I get that you don’t want to call that “skill,” but it seems to me like it’s odd to avoid it when lacking any other reasonable label.

That’s part of it, but typically as you climb ranks you’ll be seeing better play on both sides of the matchup. Some decks tend to gain ground as this is happening, some lose it.

Control priest is a good example here. Some of its matchups are getting better even though it has fewer bad opponents. You’d expect the matchup to shift closer to a 50:50 split, yet what we observe is it shifting further in favor of the control priest.

So somehow the priest is finding more wins on average with better opponents, when both decks are the same in both the D4 and top 1k legend brackets.

That pretty strongly implies that there’s a skill difference. We can’t say exactly what that is with this data, but we can identify that there’s something major going on there that deck choices don’t account for.


But a whole bunch of this is that the T1K control priest is a different deck, built to face specific matchups.

The op’s stat can’t sort that out and the data doesn’t provide that information either.

Control priest is a great example of why the “skill” measure is just garbage.

No, I agree the data doesn’t tell us, but I disagree that there aren’t many factors in play here.

When VS makes a tier list, it’s a snapshot of a very specific population. That doesn’t translate to other populations because of how it’s calculated, not because of “skill” or “deck choices.” All of those factors are in one pile in the data, and the data isn’t such that you can tease them out enough to know, for one deck, what’s different.

I think that post was childish as well.

Correct, I have limited time and reading loooong HS posts isn’t high priority.

Apologies.

This isn’t a game for me, I read it and replied. You’re right that it’s not reliable and I said much the same thing.

I’ve pretty much said everything I need to about the analysis. The sample seems insufficient and you can’t calculate an error margin, but the analysis has some probabilistic value. You’re welcome to disagree, but in doing so I believe you’re selecting a “skill” value at or close to zero, which is also guessing.
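For what it’s worth, a rough error margin on an observed winrate can be sketched under the simplifying assumption that each game is an independent coin flip. The winrate and sample sizes below are hypothetical, just to show how the margin scales:

```python
import math

def winrate_se(winrate: float, n_games: int) -> float:
    """Standard error of an observed winrate, treating each game as an
    independent Bernoulli trial (a simplifying assumption)."""
    return math.sqrt(winrate * (1 - winrate) / n_games)

def approx_95ci(winrate: float, n_games: int) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for the true winrate."""
    half = 1.96 * winrate_se(winrate, n_games)
    return (winrate - half, winrate + half)

# Hypothetical numbers: a 53% observed winrate over 400 vs. 10,000 games.
print(approx_95ci(0.53, 400))     # wide interval, roughly +/- 5 points
print(approx_95ci(0.53, 10_000))  # narrow interval, roughly +/- 1 point
```

The point isn’t the exact interval; it’s that the margin shrinks with the square root of the sample size, so “has some probabilistic value” and “has a wide error bar” can both be true at once.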

Sure, sometimes, but I would say it’s not true most of the time. Some debates are multi-generational! Consensus takes time and effort.


Not true most of the time? I don’t see how one would stay employed for any length of time if one refused to accept accurate and correct feedback from one’s colleagues!

But the real question is how does one properly pronounce “.gif” if we want to have a multigenerational debate, ha.


In business, my experience is that it’s extremely common for people to rely on bad, partial, or incomplete analysis… Usually because lack of action costs more than inaccurate action, and/or the cost of analysis is prohibitive. It’s great to aspire to perfection, but when you start getting 100-year returns on investment for something that will be replaced in 10, it gets a bit silly. Probabilistic analysis is OK for most purposes.

And it’s gif not jif.


I knew you would say that.

The person who made the file extension stated that it is jiff, like the peanut butter, because of the ads for said product.

I feel Smeet got caught a bit speechless so allow me to assist in providing the words.

This entire theory of yours (Neon’s) regarding specialization is just garbage. No, you don’t make a deck perform better in general by changing the decklist to focus on specific matchups. It just doesn’t work.

If you actually check the matchup winrates, the only matchups where Control Priest loses more than 1% going from D4-1 to T1KL are Hound Hunter, Miracle Rogue, Secret Rogue and Chad Warlock; of these, only Hound Hunter is over 3% shift. Meanwhile it gains more than 1% (more than 3% if bold) in Blood DK, Unholy DK, Relic DH, Aggro Druid, Arcane Hunter, Pure Paladin, Undead Priest, Mech Rogue, Control Warlock, Curse Warlock, Control Warrior and Enrage Warrior. (Note that in this context a matchup going from very unfavorable to just plain unfavorable is a gain.) This isn’t doing much better against a narrow niche, it’s doing significantly better against a broad swath of the meta, regardless of whether you’re looking at D4-1 or T1KL deck popularity.
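To make the bookkeeping above concrete, here is a minimal sketch of that gain/loss classification. The deck names come from the post, but the winrate numbers are illustrative placeholders, not VS data:

```python
# Illustrative winrates only, NOT actual VS figures.
d4_winrates  = {"Hound Hunter": 0.47, "Blood DK": 0.55, "Undead Priest": 0.40}
t1k_winrates = {"Hound Hunter": 0.43, "Blood DK": 0.59, "Undead Priest": 0.44}

def classify_shifts(before, after, threshold=0.01):
    """Split matchups into gains and losses by winrate delta across
    brackets, ignoring shifts smaller than the threshold (1% here)."""
    gains, losses = [], []
    for deck in before:
        delta = after[deck] - before[deck]
        if delta > threshold:
            gains.append((deck, delta))
        elif delta < -threshold:
            losses.append((deck, delta))
    return gains, losses

gains, losses = classify_shifts(d4_winrates, t1k_winrates)
print("gains:", gains)
print("losses:", losses)
```

Note that, as the post says, a matchup moving from 0.40 to 0.44 counts as a gain even though it is still unfavorable.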

Like literally every sentence I quote here from Neon is just as untrue as possible. T1KL control priest is not more of a specialist, it is better than D4-1 control priest generally — this SHOULD be obvious because overall (aka general) winrate goes up, but y’know, just to state the obvious. No, the stats CAN show that, and we don’t even need to modify them from what VS publishes. And no, it’s an example of how matchup winrate can be a much bigger factor than meta.

False. At Vicious Syndicate, overall winrate is created from TWO piles of data per meta — matchup winrates and deck popularity. Overall winrate is the popularity-weighted average of the matchup winrates. For the purposes of overall winrate calculation, the deck popularity weights can be factored out, or more accurately swapped, i.e. the same matchup winrates can be re-weighted using the deck popularity of a different meta.
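A minimal sketch of that calculation, with made-up matchup winrates and popularity shares (not VS data): the same matchup winrates get re-weighted by each bracket’s deck popularity to produce two different overall winrates.

```python
# Illustrative numbers only, NOT actual VS figures.
matchup_winrates = {"Aggro Druid": 0.62, "Blood DK": 0.48, "Mech Rogue": 0.55}

# Hypothetical popularity shares of the opposing decks in two brackets.
d4_popularity  = {"Aggro Druid": 0.50, "Blood DK": 0.30, "Mech Rogue": 0.20}
t1k_popularity = {"Aggro Druid": 0.20, "Blood DK": 0.50, "Mech Rogue": 0.30}

def overall_winrate(matchups, popularity):
    """Popularity-weighted average of matchup winrates."""
    total = sum(popularity.values())
    return sum(matchups[d] * popularity[d] for d in matchups) / total

print(round(overall_winrate(matchup_winrates, d4_popularity), 3))   # 0.564
print(round(overall_winrate(matchup_winrates, t1k_popularity), 3))  # 0.529
```

Same matchup winrates, different weights, different overall winrate: that is exactly the “refactoring” being described.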


Pity they misspelt it :slight_smile:


This whole debate boils down to the op isn’t a stats person, but they read a wiki that they think makes them an expert. It’s not the math, it’s the theories behind the math and the theories behind the data that are killing the analysis.

There is widespread agreement that the number they made up is mostly if not completely useless, but I’m the only one willing to say it in plain terms.

The fact is, every post of theirs I have read in this thread accentuated their lack of knowledge and made it worse. I can’t just keep telling them they don’t understand, because they’re convinced they’re the expert… despite a complete and utter lack of education, training, or expertise.

I don’t know what to do with that other than laugh at it and then ignore it.

Concepts like sample variance, population variance, standard units, regression towards the mean, weighted averages, and how data is collected are just lost here. It’s not the math of how they are made, it’s a fundamental lack of a conceptual framework.

Eh, I agree that the number has an error bar to it.

I don’t know how wide that bar is, but I doubt that it’s so wide as to say nothing.

And the only reason you would calculate this is to know… and you don’t know.

It’s something we already know, so the quantification should give us more know, not same know… and this is same know.

That’s pretty useless.

Again, I’m the only one willing to tell him his cardboard car and the pot on his head have not made him a grand prix racer.

I don’t think you need to be a stats expert to conduct some analysis. And regardless of whether you actually know what you’re talking about (name dropping methods without context screams student or recent student), learning something and then acting like people who haven’t learned that thing are inferior is very eye roll.


Nope, one does not. But doubling down on stupid when you’re called out by people with expertise is what this is.

Since you’ve admitted to not actually reading this thread, you should be aware that I have discussed each one of those things in this thread with context, and every one of them was hand-waved away as though I was the one lacking knowledge.

Your white-knighting of their utter lack of comprehension is more eyeroll.

And don’t bother replying, because I have no interest in your additional contributions.

I have no doubt that you spend most of your day eye rolling at people.


Nope, just you and your buddy. I surround myself with competence, so this place is odd to me.


Spot on.

For example, regression to the mean is the concept that imperfect correlations express that imperfection by becoming more average. For example, let’s say that you’ve got someone with a heritable trait that measures 143, the mean for that trait is 100, and the correlation for that trait between parent and child is 0.42. If so, the average trait measurement for that person’s child OR parent is 118 (regression to the mean, like correlation in general, is a two-way street, much to the consternation of racist midwits). This is above the mean but not as high above the mean as the original individual, hence the term “regression” — but it also applies below the mean. For example, if the trait measurement was 57, then the expectation for parents or children would be 82, which is more of an ascension to the mean.
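The arithmetic in the example above is just a shrink-toward-the-mean formula; a short sketch (assuming standardized units and equal variances, so the correlation is the shrink factor):

```python
def expected_relative(value: float, mean: float, correlation: float) -> float:
    """Expected value for a correlated relative: the deviation from the
    mean shrinks by the correlation coefficient. Assumes standardized
    units and equal variances on both sides."""
    return mean + correlation * (value - mean)

print(round(expected_relative(143, 100, 0.42)))  # 118
print(round(expected_relative(57, 100, 0.42)))   # 82
```

Note the symmetry: plugging in the child’s score predicts the parent just as well as the reverse, which is the “two-way street” point.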

Now I would like to think that I understand this concept just fine, but I don’t see how it has any applications whatsoever to the topic of the opening post. I’m not trying to establish a correlation between, say, how popular a deck is and its winrate. I imagine that such a correlation could be established, but it’s simply not what we’re talking about. We’re just talking about winrate here, and how the two components that define winrate, popularity and matchups, affect it.

All Neon is doing here is throwing around buzzwords that have nothing to do with the situation to come off smarter than he is.