The rank isn’t even logistic; it doesn’t even have a distribution. The rank is a point estimate of the mean of the performance distribution for any one player.
If you want to actually inform yourself, here is a pretty thorough overview of the Elo system from the chairman of the US Chess ratings committee. http://www.glicko.net/research/acjpaper.pdf
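As a side note on the “logistic” point, here is a minimal sketch (my own illustration, not taken from the paper above) of the standard Elo expected-score formula, which is a logistic curve in the rating difference:

# Sketch: Elo's expected score is a logistic function of the rating
# difference (standard base-10 form with a 400-point scale).
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Example: a 200-point rating edge gives an expected score of roughly 0.76.
print(elo_expected_score(2200, 2000))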
A t-test is a test of whether two different samples could plausibly have come from populations with the same mean; there is a specific statistical meaning to the word here that may have gone over your head.
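For concreteness, here is a minimal two-sample t-test sketch (SciPy’s Welch variant, with made-up numbers) showing the specific sense of “t-test” being used:

# Minimal sketch of a two-sample (Welch's) t-test on made-up data.
from scipy import stats

sample_a = [0.52, 0.48, 0.55, 0.51, 0.49, 0.53]  # e.g. per-event win rates, group A
sample_b = [0.45, 0.47, 0.44, 0.50, 0.46, 0.43]  # e.g. per-event win rates, group B

t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=False)
print(t_stat, p_value)  # a small p-value suggests the two group means differ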
By his definition the vast majority of statistics is wrong. He’s arguing that because different things are different, you can’t compare them. But comparing different things is literally what statistics is about. He’s rejecting the foundation of statistics.
If PvP and PvZ can’t be compared because they are fundamentally different matchups, then people taking drug A and people taking drug B can’t be compared because they are fundamentally different drugs. Therefore, you can’t measure which drug is better at treating heartburn.
What he is saying is patently absurd but I am trying to be patient with him.
I’m just having a giggle now. Like, he is basically trying to argue that logistic regression techniques are not logistic. And when that doesn’t work he builds a strawman the size of the moon, so…
Yeah, quite a nice giggle.
A single point does not have a distribution. A random variable has a distribution.
For example, if you roll a die and get a 6, the 6 is just the result of one roll; it’s a single, fixed point. The possible outcomes of a roll of a die do have a distribution, though (uniform in this case).
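A quick simulation sketch of that die example (my own illustration): a single roll is just a number, while repeated draws of the random variable trace out the uniform distribution.

# Sketch: one roll is a fixed point; the random variable "outcome of a roll"
# has a distribution you can see by rolling many times.
import random
from collections import Counter

single_roll = random.randint(1, 6)                          # just a point, e.g. 6
many_rolls = [random.randint(1, 6) for _ in range(60_000)]

print(single_roll)
print(Counter(many_rolls))  # each face shows up roughly 10,000 times (uniform)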
Both nomufftootuff and I have a graduate education in the field, friend =). I’m not even trying to reason with you at this point; I’m just pointing out the severe flaws in your understanding of Elo/statistics so people are aware of them.
Nomufftotuff didn’t understand the difference between a hypergeometric and a binomial distribution. He took issue with the fact that my grandmaster models used the binomial distribution; he said it should be hypergeometric. I had to explain to him that the two are virtually identical when the draw size is small relative to the population, and that the binomial distribution is easier to work with, so it’s preferable. He was very unhappy. He acted a lot like you are acting. Then there was that time we were talking about stats and nomufftotuff, who is supposedly a professor of stats at Berkeley, came to your rescue at 3 AM in California on a weekday. You are nomufftotuff.
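To illustrate the hypergeometric vs. binomial point with a sketch (made-up population and draw sizes, not the actual grandmaster model): when the draw is small relative to the population, sampling without replacement barely differs from sampling with replacement.

# Sketch: hypergeometric vs. binomial PMFs with hypothetical numbers.
from scipy.stats import hypergeom, binom

N_pop = 20_000   # hypothetical population size
K = 5_000        # hypothetical number of "successes" in the population
n_draw = 30      # small draw relative to the population

p = K / N_pop
for k in (5, 8, 10, 12):
    ph = hypergeom.pmf(k, N_pop, K, n_draw)   # without replacement
    pb = binom.pmf(k, n_draw, p)              # with replacement
    print(k, round(ph, 5), round(pb, 5))      # the two columns nearly match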
Batz, they don’t care about facts or logic. They saw yesterday’s ZvZ European finals, so the hysteria is starting all over again. (Never mind that the NA finals were PvP… not relevant.)
It’s here for those who do care and want to discuss. It’s a given that it will make a lot of people mad since balance is such a heated topic and data as definitive as this is a formidable threat to their closely-held opinions.
I’ve always much preferred the “engineering” approach to finding solutions. Who needs the exact solution when you can get a very accurate approximation with one-tenth of the resources?
I think the tipping point for me was fluids. Once you start dealing with non-ideal situations you basically say “screw it” and start making assumptions to simplify the problem as much as possible. Either way, I digress. The point I wanted to make is that, compared to the ridiculous assumptions we make there, assuming the pool of players is so large that picking the same one twice is extremely improbable, just to make the problem workable, is among the first things I’d do. I guess I’m a lazy person, but hey, it’s less time.
Yep. The math becomes very complicated very quickly so it’s a question of time but also a question of error. The more complex a system is the more likely an error. So, do you achieve more accuracy with simpler math that uses approximations? Usually that is the case.
In computer science, which is my background, more complicated systems are much more difficult to maintain and improve, and they are slower to run. When it comes to simulations and data processing, being able to squeeze a few optimizations out of a chunk of code that will run millions of times makes a big difference. In other words, you have basically no choice but to make loads of approximations.
Being lazy is good in a sense. Was it Bill Gates who said that if he wants something done he gives it to a lazy person? They always find the easiest way to get the job done which equates to optimizing the heck out of the process.
This is exactly why data scientists with CS backgrounds tend to make major errors: you guys just throw model after model at the data until one gives you the result you want, instead of understanding the assumptions and theory behind a model so you can pick the best one. Which is exactly what is happening in this thread.
I can see that happening. Honestly, it depends on the CS courses; there are a lot of kids who learn Python and think they’re gods, who have no understanding of the algorithms and how they work but can invoke them in code well enough. You’re not one to talk, though, since the course material you’ve linked to is exactly the stuff you’re bashing.
Mate, if you don’t believe me, go and ask other people. Start with the bias topic so you can see what I’m talking about, assuming you truly don’t believe me.