I try to imagine the dialogue at DeepMind

Um, trying things and then learning what works and what doesn’t is how an AI like this learns. And there are many orders of magnitude more wrong things to do than right things, so it will try a lot of wrong things before learning the right things (the things that lead to wins). Showing that it makes a mistake means nothing. Showing that it makes the exact same mistake over and over without learning that it doesn’t work would mean something.
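
A toy sketch of that ratio (all numbers made up; this is not DeepMind’s training loop, just the shape of the problem):

```python
import random

# Toy illustration (made-up numbers, not DeepMind's actual setup): an
# agent sampling uniformly from a huge action space in which only a tiny
# set of actions is "right". Early on, nearly every attempt is a mistake;
# learning consists of reinforcing the rare attempts that led to wins.

ACTION_SPACE = 10_000          # hypothetical number of possible actions
GOOD_ACTIONS = set(range(10))  # hypothetical tiny set of "right" actions

def random_episode(n_moves=20):
    """Return how many of n_moves randomly sampled moves were 'right'."""
    return sum(random.randrange(ACTION_SPACE) in GOOD_ACTIONS
               for _ in range(n_moves))

episodes = 1_000
hits = sum(random_episode() for _ in range(episodes))
print(f"{hits} right moves out of {episodes * 20} attempts")
# Expected hit rate is 10/10000 = 0.1%: wrong moves vastly outnumber right ones.
```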

Someone who is actually familiar with even the very basics of AI development should know this.


What you listed, once again, does not contradict what I said. Linking random quotes from their paper doesn’t help your argument.

I’m confused, are you implying that this method of sampling is biased somehow? Or do you just not understand how sampling works?

I’m done talking to you until you bother to read and understand what I am saying, because it is abundantly clear that you are making zero effort to do so. Either that or you are a troll.

You are the one claiming that their sampling method was flawed “because it didn’t include enough lurkers”, which it isn’t, because their sampling method was pretty much on point.

See, there is the exact problem. I detailed exactly why their sampling method isn’t “on point”, and you contemptuously threw it aside without addressing it. Linking me to the part of their paper that states that their selection was randomized does not refute what I said about their sample being too small. You didn’t read what I wrote. Either that, or you are a troll quoting random snippets out of their paper.

They literally explained how they sampled, which shows that it’s about as randomized as you can realistically expect.

What you are suggesting is something along the lines of repeated sampling until you get a sample that YOU are happy with.

That is 100% dumb, wrong, and unethical.

No, it isn’t. They make randomized changes and infer that the new set of changes is better than the previous set when it increases the win-rate. The problem with this, again, is that the win-rate of a strategy will fluctuate depending on the meta, so their metric for judging what is “better” is biased, because what is “better” depends on the scenario.
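
A minimal simulation of that bias, with invented numbers: the same strategy, with fixed per-matchup strength, measures very differently depending on the opponent mix:

```python
import random

# Hypothetical illustration: the *same* strategy, with fixed per-matchup
# win probabilities, shows different measured win-rates depending on the
# mix of opponents it faces. All numbers are invented for the example.

P_WIN = {"rush": 0.8, "turtle": 0.3, "macro": 0.5}  # fixed strategy strength

def measured_winrate(opponent_mix, games=10_000):
    opponents = random.choices(list(opponent_mix),
                               weights=list(opponent_mix.values()), k=games)
    wins = sum(random.random() < P_WIN[opp] for opp in opponents)
    return wins / games

# Same strategy, two different metas:
print(measured_winrate({"rush": 0.7, "turtle": 0.2, "macro": 0.1}))  # ~0.67
print(measured_winrate({"rush": 0.1, "turtle": 0.7, "macro": 0.2}))  # ~0.39
```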

Proof, please. Make an argument as to why it “means nothing.” I think the fact that it can’t predict that a base is going to die and drops fresh MULEs there is pretty telling about the bot’s ability to predict future game-states, and as such, mistakes like this carry a lot of meaning.

I made an argument about their sample being too small. You linked me to a part of their paper talking about the randomization of the sample. I told you that I wasn’t talking about the randomization. You then say the randomization proves me wrong, again. Listen, kid, I have better things to do with my time than to engage in this charade. When you have a real argument, feel free to post it. Until then, I am assuming you are trolling (I pay you the compliment of assuming you aren’t actually this dumb).

@Gabriel, he is literally contradicting himself.

He goes from saying that win-rates fluctuate, so 100 games is not a good enough sample size, to claiming that a couple of games is “proof” that it’s not GM level.
The amount of mental gymnastics going on here is extremely fascinating.

Which you could easily show if you wanted to. You know basic stats, right?
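
For reference, the relevant basic stats fits in a few lines: a 95% confidence interval for a win-rate via the normal approximation, with hypothetical win counts:

```python
import math

# Back-of-the-envelope 95% normal-approximation confidence interval for
# a win-rate estimated from n games. Win counts below are hypothetical.

def winrate_ci(wins, games, z=1.96):
    p = wins / games
    half_width = z * math.sqrt(p * (1 - p) / games)
    return (p - half_width, p + half_width)

print(winrate_ci(55, 100))    # ~(0.45, 0.65): +/-10 points from 100 games
print(winrate_ci(550, 1000))  # ~(0.52, 0.58): much tighter from 1000 games
```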

That’s the same as what I said. Making changes and trying them is trying things and seeing what works.

You said so yourself. It makes “randomized” (they aren’t actually necessarily random, btw) changes and tries them. This means that sometimes the randomized changes are bad. More often than not they are bad, actually; normally only a few are beneficial. Which means mistakes are an expected part of the learning process. In other words, making mistakes is literally an integral part of getting better.
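
A rough sketch of that mutate-and-evaluate loop (the general shape, not AlphaStar’s actual algorithm):

```python
import random

# Rough sketch of a mutate-and-evaluate loop: make a randomized change,
# keep it only if the measured score improves. Most mutations make
# things worse, so "mistakes" are baked into the search itself.

def evaluate(params):
    """Hypothetical stand-in for 'play some games, return a score'."""
    return 1.0 - abs(params - 0.7)  # peak performance at params == 0.7

params, best, kept = 0.0, 0.0, 0
ATTEMPTS = 2_000
for _ in range(ATTEMPTS):
    candidate = params + random.gauss(0, 0.1)  # randomized change
    score = evaluate(candidate)
    if score > best:                           # keep only improvements
        params, best, kept = candidate, score, kept + 1
print(f"{kept} of {ATTEMPTS} mutations helped; final score {best:.3f}")
# Typically only a handful of the 2,000 attempts get kept; the rest are
# the expected mistakes.
```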

And it is clear to this published developer with multiple tech patents that you haven’t the faintest clue about AI development.
You are talking nonsense in circles.


Yeah, this guy likes to talk big; he claimed to work w/ big data or data science or something like that, but … he doesn’t know how to use Python/R for data analysis LOL.

Uh, no, there is a huge difference between what you said and what I said. It doesn’t “see what works”; it correlates changes with an increase in win-rate and assumes that this means they are better. I argued that it can’t make that assumption, because estimating performance from a win-rate is inherently biased when win-rates change depending on the composition of opponents. A win-rate can be high not because the strategy is good but because its opponents are bad, for example.

What you are saying doesn’t refute my argument. I am not saying that the AI isn’t improving, yet you are talking as if I did. I said the AI hasn’t achieved any remarkable goal as of yet. Learning how to micro and macro is not impressive because they are some of the simplest aspects of the game - aspects that traditional AI algorithms can tackle with ease.
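
For the curious, this is the sort of hand-scripted rule that “micro” reduces to; a sketch of classic kiting (all names invented for illustration; a real bot would go through a game API such as the SC2 client protocol):

```python
from dataclasses import dataclass

# Sketch of the kind of hand-scripted micro the post refers to: a classic
# kiting rule. All names here are made up for illustration.

@dataclass
class Unit:
    x: float
    weapon_cooldown: float  # seconds until the unit can fire again
    attack_range: float

def kite(unit: Unit, enemy_x: float) -> str:
    """Fire when the weapon is ready; otherwise open distance while reloading."""
    if unit.weapon_cooldown == 0:
        return "attack"
    if abs(unit.x - enemy_x) < unit.attack_range:
        return "retreat"  # back off while the weapon is on cooldown
    return "hold"

print(kite(Unit(x=0.0, weapon_cooldown=0.4, attack_range=5.0), enemy_x=3.0))  # retreat
```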

Sorry to burst your bubble, but you are very wrong. I personally know for a fact just how much experience I have, and I think the evidence in this thread makes it quite clear that I am very knowledgeable on the subject. For example, I have now corrected you twice on how bots such as these actually learn.

You have posted verifiably nonsensical tripe from your first post. Any undergrad who has taken even an Intro to AI course could tell that from your posts. I assure you that is 100% true.


Aka you have no argument and have resorted to personal attacks since you lost the debate.

🙂 You said I am wrong. I replied that you are wrong. If my post was a “personal attack”, then yours was first. Durrrrr. That’s obvious.

The fact that you are trying to discredit someone with a false and utterly silly accusation of a “personal attack” shows I am correct about you not knowing anything about AI development.


Oh boy, on to denying the facts. You’ve lost the debate really hard.

You claimed my knowledge was less than that of an undergrad taking his first AI course. I simply pointed out that this was quite ironic, since you’ve stated several things that are blatantly wrong, which I corrected, and which you promptly ignored. You didn’t dispute my statements, which is an admission that they were correct. So you admit that you were wrong and I am right, while you attack me personally and claim I am not knowledgeable on the subject. And now you are claiming that I am attacking you personally, when all I am doing is defending against the personal attack you directed at me.

lol

Anyway, for anyone else reading this thread: I absolutely assure you that making odd-looking, even extremely odd-looking, mistakes is a normal and expected part of the AI learning process. It doesn’t mean something is wrong. They aren’t coded to be good at something right out of the box. You can very safely ignore someone who thinks a single example of a mistake is meaningful. If it makes the exact same mistake over and over and continues losing because of it, then something is wrong.
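
One concrete way to draw that line, with made-up checkpoint data: track how often a specific mistake occurs per 100 games at successive training checkpoints and check whether the rate falls:

```python
# Hypothetical checkpoint data: occurrences of one specific mistake per
# 100 games at five successive training checkpoints.
mistake_rate_per_100 = [38, 24, 11, 4, 1]

# A falling rate means the agent is learning from the mistake; a flat or
# rising rate across checkpoints would be the actual red flag.
declining = all(a >= b for a, b in zip(mistake_rate_per_100,
                                       mistake_rate_per_100[1:]))
print("learning from it" if declining else "stuck repeating it")
```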

  1. You are once again misrepresenting my argument (again - I have corrected you on this point twice). I didn’t make a remark as to whether the AI is learning/improving. I remarked on its current status.

  2. Again, you make a claim without backing it up. I have detailed why these mistakes are important in showing that the AI hasn’t accomplished anything remarkable, and sweeping those mistakes aside as unimportant while providing zero justification just makes you look petty. Pointing out that mistakes are part of the learning process doesn’t refute the fact that the mistakes are very telling about its current status. The AI’s current status is unimpressive because of these mistakes, and pointing out that mistakes are part of the learning process doesn’t change that fact. Its current position in the learning process is unimpressive, which raises the question of whether it is even capable of reaching a status that is impressive.

When you have to misrepresent someone’s argument in order to attack it, all you are doing is admitting that their actual argument was too strong for you to address.