Being able to calculate the trajectory of ball and predict exactly where it will land is the same thing, according to you, as a loose correlation between ice cream and drownings?
Sure, Newton’s laws don’t work in all scenarios (singularities, for example), but they work in 99.99999999999999% of scenarios, and thus have an extremely high correlation and are generally considered to be causal.
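The trajectory claim can be made concrete with a minimal sketch (hypothetical throw, no drag, all numbers illustrative):

```python
import math

def landing_distance(v0, angle_deg, g=9.81):
    """Range of a projectile launched from ground level, ignoring drag."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A ball thrown at 20 m/s at a 45-degree angle lands about 40.8 m away,
# and the same formula predicts every other throw in this domain too.
print(round(landing_distance(20, 45), 1))
```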
If you put Newton’s laws in the same category as correlations between ice cream and drownings then you’re a math/science denier, exactly as I’ve been saying. You could not predict if someone is going to drown or not based on whether they eat ice cream, sorry.
Your math didn’t calculate the trajectory of a ball. Your math guessed at a bunch of variables and then tested to see if your variables got the right trajectory afterward. It is possible that your variables are wrong, but still managed to mimic the right trajectory. There is more than one set of variables that could correctly show the trajectory of a ball.
There is the same double standard I’ve been pointing out repeatedly. If Newton estimates the gravitational constant and drag constants by comparing the output of his math to the real world, it’s “proof”. When I estimate ladder variables by comparing the output of my math to the real world, it’s “guessing” and “assuming.”
You are equating testing a hypothesis to making an assumption and that is math denial. Testing a hypothesis is, by definition, not an assumption.
Newton’s equations won’t perfectly predict every ball that is thrown down to infinite accuracy. They’re only accurate within a certain domain, where the variables are knowable. There will always be a certain inaccuracy in measurements, for example, and in extreme scenarios, like extreme velocities or extreme mass, his equations give inaccurate results. They are merely a correlation - a VERY strong correlation, mind you, but a correlation nonetheless.
The better question to ask isn’t whether this is a correlation - it’s whether we are in the domain of knowable variables where the math can produce stable and accurate predictions.
I won’t claim to understand how Newton proved the theory of gravity. But I very much doubt he simply guessed at gravity’s acceleration, got it right, and then said “there we go, gravity proven!”
Let’s use a simple example. I know that something applied 100 N of force to something else. I know that Force = mass x acceleration. I guess that the object’s mass is 2 kg and its acceleration is 50 m/s². Hey look! It got the right answer! I’ve proven that the object must have a mass of 2 kg and been accelerating at 50 m/s²!
I couldn’t be wrong, right? Because my applied variables equaled the final outcome, right?
What’s that you say? There are an infinite number of other possible ways to get that outcome? And my guessing the variables and getting the right outcome doesn’t mean I’ve proven my numbers are correct?
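The underdetermination point can be sketched in a few lines (hypothetical numbers): many different (mass, acceleration) pairs reproduce the same observation.

```python
# F = m * a: infinitely many (mass, acceleration) pairs give the same force,
# so reproducing the observed 100 N does not prove any one pair is correct.
force = 100.0
candidates = [(2.0, 50.0), (4.0, 25.0), (10.0, 10.0), (0.5, 200.0)]
matches = [(m, a) for m, a in candidates if m * a == force]
print(matches)  # every candidate reproduces the observation
```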
This is how he did it. You create an equation that you think will replicate an observed behavior, and then you tune the parameters of your equation to match the observations as closely as possible. If the parameters can’t replicate the observations, then the equation is wrong or incomplete. If you can match the parameters, you consider it correct until you run into an observation that your equation+parameters can no longer explain. So the testing is perpetual, aka it goes on forever, but when a theory is heavily tested (which is just another way of saying “has a very high correlation”) it is generally accepted as fact.
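That tune-the-parameters loop can be sketched with a toy example (hypothetical drop measurements and a simple grid search, not whatever method Newton actually used):

```python
# Fit a single parameter g so that d = 0.5 * g * t**2 best matches
# measurements of a falling object.
measurements = [(1.0, 4.9), (2.0, 19.7), (3.0, 44.0)]  # (time s, distance m)

def squared_error(g):
    return sum((0.5 * g * t ** 2 - d) ** 2 for t, d in measurements)

# Grid search over candidate values of g; the best fit lands near 9.8 m/s^2.
best_g = min((x / 100 for x in range(900, 1100)), key=squared_error)
print(best_g)
```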
And the difference is that if it is a false positive it will pass this one test, but not other tests. You can come to the right answer the wrong way a few times, but if something is able to reliably predict over a broad range of scenarios, it’s considered proven.
The entire domain of the SC2 ladder is, in and of itself, a wide range of scenarios. So, if an algorithm can model them accurately, it is proven.
And the difference is that if it is a false positive it will pass this one test, but not other tests. You can come to the right answer the wrong way a few times, but if something is able to reliably predict over a broad range of scenarios, it’s considered proven.
You only performed one test…and your results haven’t been repeated by anyone else.
So we’re back to my original point, you haven’t proven anything. You’ve only provided one possible explanation.
We already agreed you haven’t proven it. You’ve only taken the first step.
Not only are you conflating 2 significantly different types of assumptions, I never claimed my assumption was proven. I said it was a reasonable assumption based on my logical analysis. I even agreed it’s just an assumption, and not proven.
We’ve…already been over this. Why are you now backtracking? We just agreed it’s not proof. It needs to be replicated by others and tested in multiple scenarios before we can consider it proof. Merely accurately predicting a known observation is not sufficient to consider your assumption proven.
If that’s what you believe then I’ve won the argument, because that’s not what you stated initially. You stated that being able to make accurate predictions does not equate to proof, which is borderline science/math denial.
No. I said your one, single accurate prediction isn’t sufficient to prove your hypothesis correct. I never said repeated accurate predictions can’t lead to proof.
That’s not true. You repeatedly said that any and all simulations are wrong, that any one of a thousand math algorithms can produce the same output, that you can tune an algorithm to make it say anything you want it to say, that being able to make reliable predictions does not equate to proof because “correlation does not prove causation,” etc.
All these claims are radical, borderline math denial. You’ve changed your position to this, which is MUCH more reasonable:
In other words, you backed down and I won the debate.
There were about a dozen predictions made in the OP, sweetie.
The algorithm does take into account new arrivals. This is implicit since players quit the game yet the algorithm maintains 100k players.
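A minimal sketch of that implicit-arrivals mechanism (all numbers hypothetical; this is not the OP’s actual code): each tick, some players quit and the population is topped back up to a fixed size.

```python
import random

POP_SIZE = 100_000
QUIT_RATE = 0.01  # hypothetical per-tick quit chance

def tick(population, rng):
    survivors = [p for p in population if rng.random() > QUIT_RATE]
    # New arrivals fill the gap, keeping the ladder at a constant size.
    arrivals = POP_SIZE - len(survivors)
    return survivors + ["new"] * arrivals, arrivals

rng = random.Random(7)
pop, arrivals = tick(["veteran"] * POP_SIZE, rng)
print(len(pop), arrivals)
```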
Once again your suspicions are radically detached from reality. Here is the full skill distribution:
https://i.imgur.com/2lIptmE.png
This chart is another way of showing the same data. Zerg is lowest in the higher leagues due to imbalance pushing zergs down the ladder, and lowest in the lower leagues due to zergs quitting the game. Zerg peaks in the middle leagues, since that’s where new players start. Hence Zerg’s behavior across the entire ladder is explained using a small handful of parameters (3 performance biases, a small chance to quit the game upon low win-rate, a small chance to change race upon low win-rate).
You misunderstand the point of the simulation. The point is to replicate the ladder with the fewest possible assumptions. You blasted me for making the assumption that new players are equally likely to pick each race, then suggested I replace that with 3 assumptions (a chance of picking each race = 3 assumptions). The number of players for each race is a testable prediction of this simulation, not an assumption to ram in at the start. If you can explain the ladder distributions using only 1 assumption, rather than 3, that is preferable.
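The 1-assumption-versus-3 contrast can be sketched like so (function names and weights are mine, purely illustrative):

```python
import random

RACES = ["terran", "zerg", "protoss"]

def pick_race_uniform(rng):
    # One assumption: every new player is equally likely to pick each race.
    return rng.choice(RACES)

def pick_race_weighted(rng, weights):
    # Three assumptions: a separate pick probability for each race.
    return rng.choices(RACES, weights=weights, k=1)[0]

rng = random.Random(42)
sample = [pick_race_uniform(rng) for _ in range(30_000)]
shares = {race: sample.count(race) / len(sample) for race in RACES}
print(shares)  # each share should land near 1/3 under the single assumption
```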
That’s not true. You repeatedly said that any and all simulations are wrong
You go back and quote for me where I said this. If it’s true you can throw it in my face right now. Find where I said this and quote it. I never did.
My only claim throughout this entire argument has been that your model, by itself, does not prove causation merely because it correlated with the end result. There was no backtracking, there was no bait and switch. There was no complete denial of predictive algorithms as a reliable method of reaching conclusions.
My only claim was with respect to your specific algorithm and the efficacy of its conclusions.
Am I lying now? Go prove me wrong. Go find where I said all predictive algorithms are bogus. Go find where I said it’s impossible to ever reach an accurate conclusion with a predictive algorithm.
Your inability to recognize my actual argument after repeated attempts, rewording, and even simple examples provided on my part, is just mind boggling.
But hey, at least you have that advanced math and CS knowledge, sweetie.
No, regardless of whether the domain is finite or infinite, the mean can’t be higher/lower than the highest/lowest known data point. So your statement:
Is BS.
The mean is calculated based on the data collected, which means that
the mean is relative to the data.
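The claim is just the standard bound on an average; a quick check (MMR values hypothetical):

```python
data = [3200, 3500, 4100, 4700, 5200]  # hypothetical MMR samples
mean = sum(data) / len(data)
# An average can never escape the range of the data it is computed from.
assert min(data) <= mean <= max(data)
print(mean)
```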
You lost it.
You are trying to create a mathematical model of the SC2 ladder, so you can’t just pick a mean. You have to select it taking into consideration real-life examples.
The SC2 ladder distribution indicates that 0 and 8k are not valid means.
Except we know the population nearly tripled since zerg was 30% (already under-represented). Or, if you want to take a point in time when the populations were already rather stable, at about 500K players in May 2018, zerg and toss were 28.5% and terran was 34%. This is data, and it should be your starting point if you want a simulation based on facts.
Which, like your first graph, is unusable.
To be able to use it we’d need to group it by percentage range. If you want to compare it to Blizzard’s leagues, we need the number of players in each range: 0–5%, 5–25%, …
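Grouping per-player percentiles into those ranges is only a few lines (bin edges beyond the two named are hypothetical; half-open intervals assumed):

```python
# Count how many players fall into each percentage range, e.g. 0-5%, 5-25%, ...
BINS = [(0, 5), (5, 25), (25, 50), (50, 75), (75, 100)]

def bin_counts(percentiles):
    counts = [0] * len(BINS)
    for p in percentiles:
        for i, (lo, hi) in enumerate(BINS):
            if lo <= p < hi or (hi == 100 and p == 100):
                counts[i] += 1
                break
    return counts

print(bin_counts([2, 4, 10, 30, 60, 90, 99]))
```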
I mean, there are some ideas here, but if one day you get to a profession where you need to present graphs, you’ll need to show graphs that are actually useful. Focus on the scales and on what you draw and why - what point you are trying to illustrate with a drawing. To be fair, the first time people try to submit a paper they often have little clue what to put in, and there is a lot of rework before anything gets accepted. For a completely clueless person, at least it shows you are trying.
You don’t explain with a Monte Carlo; at best you can create a model that matches reality and possibly keeps matching it. You could absolutely have curves that better match the current ladder while only taking temperature into account. If your curves did match the ladder (which they currently don’t - wrong starting point, no usable graph to show anything, and I still suspect you did not group leagues properly, etc.), your hypothesis would still be debatable, since you currently assume much of the distribution difference in the lower leagues is due to the worst players leaving, and therefore explain terrans dominating there by them being favored (which you denounced in a later post when CheeseClown was not happy seeing your simulation needed terrans to be favored to work the way you intended it to work).
And again, especially for CheeseClown and the other TCF who really want to see protoss favored and agree with you on this simulation: it assumes terrans are heavily favored. You are free to agree with the simulation, but the graph above (unusable, because there is no info on how many people would be in each “league”, not to mention GM or professionals) is made assuming terran skill is lower than that of equivalent-MMR protoss.
Not that I see many people agreeing with Batz anyway but I felt it was worth mentioning.
You seem to have a really hard time understanding that the “starting point” is unknowable. You can’t pick a random spot in the SC2 ladder simulation and say it’s the “start”. You don’t know what the starting conditions are. They have to be solved for. That’s literally the point of this thread.
The first graph is grouped by percentage.
LOL
This is just a weekend funsy project kiddo. I appreciate that it was done well enough that you are trying to hold it to the standard of whitepaper research.
Please do. I’ll wait. Use the temperature to match curves to the SC2 ladder. It will be fun to see you try and fail to make it work. Only a person who is truly clueless about mathematics would make a claim like that.
I don’t assume. It’s a well-known fact that people in the lower and upper leagues play the majority of games (hint: Pareto principle). You’ve already admitted as much:
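The Pareto-style concentration being claimed is easy to illustrate with made-up game counts (the numbers are mine, not ladder data):

```python
# Share of total games played by the most active 20% of a hypothetical sample.
games_played = sorted([5, 8, 10, 12, 15, 20, 40, 60, 150, 300], reverse=True)
top_20_percent = games_played[: len(games_played) // 5]
share = sum(top_20_percent) / sum(games_played)
print(round(share, 2))  # the busiest fifth accounts for most of the games
```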