I try to imagine the dialogue at DeepMind:

AlphaStar: reaches GM with all 3 races
Deepmind’s marketing team: Success! It has MASTERED SC2!
Also AlphaStar:

https://i.imgur.com/NXFhx4H.png
https://i.imgur.com/NniBzfy.png
https://i.imgur.com/HfU2QHq.png

These are only a few of dozens of examples. This thing butchers the basics of the game, but it has nice micro and amazing macro. I’ve done a fair bit of AI development, and I am not impressed with this thing. Here is why: macro and micro are very easy things to model. There were AIs programmed by teenagers with nice micro/macro. The point of creating a self-learning AI is to tackle problems that are difficult to model. A simple question like “When do you drop MULEs?” is very difficult to answer because it depends on the situation, and understanding situations and how they evolve into future situations is very difficult.

Yet here it is, dropping MULEs on a planetary that is going to die. That means its ability to predict even near-future outcomes is non-existent. It couldn’t tell the planetary was going to die and that it shouldn’t drop MULEs, yet that would be clear to a human before the battle even started. There are infinitely more difficult questions in SC2 than “Will this planetary die?”, yet here it is, dropping MULEs on a dead planetary.

I think it is pretty clear that this AI wins games on merits that AIs are already good at, while it struggles with the basics of the game. It builds factories in the middle of nowhere, blocks its mineral line with depots, throws its reapers away with NO regard for how important the first reaper is for scouting, and the list goes on and on.

These are huge problems from a game-understanding perspective. If a Terran throws his reaper away, creep spread gets a huge boost, early-game liberator/hellion harass will fail, and mid-game tank pushes won’t work. This fast-tracks the zerg to hive tech. This AI doesn’t understand this. It has very limited game understanding. It doesn’t understand how one situation evolves into another. It just spams units and has nice micro.

I think the problem here is that the DeepMind team isn’t imposing the right restrictions on the AI. Since macro/micro are so easy to model, their bots will have a natural inclination to try to win the game on those grounds while ignoring the more complex aspects of the game. The way you work around this is to impose restrictions on the bot. You could limit its APM, for example, and that would restrict its reliance on micro. They have added some restrictions like this, but I think it is clear that they haven’t added enough.

First, I think the micro restrictions are sufficient, with the exception of focus-fire. It has perfect focus-fire. Having imperfect focus-fire is a restriction that humans have to deal with. Sometimes even Maru’s hands get sweaty.

Second, I don’t think the restrictions on APM are sufficient to limit macro. Macro requires very little APM; it is more about timing than speed. Bots can and will nail perfect timing every time. Add a timing restriction to the bot’s macro.

Third, the bot can select units on the edge of the screen. Humans can’t do that, because it would cause the camera to scroll. Add that restriction: if it tries to select a unit on the edge of the screen, the camera scrolls and it misclicks.

These are the problems that I see. The goal of a self-learning AI is to model difficult scenarios and answer difficult questions. Their bots are naturally inclined to win games on the strengths that bots already have, yet that isn’t the goal of a self-learning AI, and it impedes the bot’s ability to focus on what it is supposed to. The restrictions they add try to incentivize the bot to win on other merits, but those restrictions are insufficient, and the massive advantages that computers have in APM and precise timing bleed through. More aggressive strategic restrictions are required to force their bot to win games on the basis of game knowledge and strategy. Until this happens, their bot is only marginally more impressive than traditional AIs.
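To make the idea concrete, here is a minimal sketch of what such restrictions might look like in a bot harness. Everything here is hypothetical (the class name, the numbers, the queue design are mine, not anything DeepMind published): an action queue that enforces an APM cap over a sliding one-minute window and adds random reaction-time jitter so frame-perfect timing is impossible.

```python
import random
from collections import deque

class HandicappedActionQueue:
    """Hypothetical sketch: throttle a bot's actions to human-like limits.

    Caps actions per minute and delays each action by a random
    reaction time, so the bot cannot rely on superhuman speed or
    perfectly timed macro cycles.
    """

    def __init__(self, max_apm=300, jitter_ms=(50, 250)):
        self.max_apm = max_apm          # sliding-window APM cap
        self.jitter_ms = jitter_ms      # (min, max) added reaction delay
        self.recent = deque()           # timestamps (ms) of the last minute's actions

    def schedule(self, action, now_ms):
        """Return (execute_at_ms, action), or None if the APM cap is hit."""
        # Drop timestamps that fell out of the one-minute window.
        while self.recent and now_ms - self.recent[0] > 60_000:
            self.recent.popleft()
        # Refuse the action outright if the cap is already reached.
        if len(self.recent) >= self.max_apm:
            return None
        # Delay execution by a random human-like reaction time.
        delay = random.randint(*self.jitter_ms)
        self.recent.append(now_ms)
        return (now_ms + delay, action)
```

The same wrapper could also model the misclick idea: before executing a select, check whether the target is within some margin of the screen edge and, if so, perturb the click position instead of executing it cleanly.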

14 Likes

To be fair…

I have seen high level Terran, even pro players, occasionally drop MULEs at a dead base.

The factory placements were derpy, but it didn’t get completely stuck, and frankly it coped with the lack of space better than most metal-league human players, which is a lot of humans.

Even in mid-masters, players occasionally screw up their reaper micro. Again, AlphaStar is outdoing a lot of humans.

These examples show that calling it “better than 99.8% of players” is a gross overstatement of the AI’s merit, but it doesn’t have to beat the best humans to be significant. It just has to beat an average human. And even in terms of decision-making, it’s making real progress toward that mark.

1 Like

Until it can trash talk and post alt-right edgelord memes like Tay from Microsoft, I remain unimpressed.

3 Likes

StarCraft is first and foremost a game about mechanics. If you want strategy, they should teach it to play XCOM 2 on the hardest difficulty.

2 Likes

Yes, but the problem is that computers have a definitive advantage in mechanics simply because they are computers. If an AI is supposed to win the game on other merits, it needs limitations on its mechanical abilities so that it is forced to focus on what it is supposed to focus on.

Here is a comparison. A computer can do millions of simple addition operations a second. It will beat every human at speed math, which puts it ahead of 99.99999999999999999999999% of humanity in that task. Saying that computers are smarter at math on this basis is misleading. Doing things fast does not equate to having a mastery of mathematics.

SC2 is like this. The bot can play fast, and by that merit it can beat 99.8% of human players. This in no way, however, translates into any mastery of SC2.

Computers have other advantages, such as precision and timing. A computer can hit every click down to the pixel, and it can hit the timing of events down to the microsecond. Both of these increase its performance at SC2 while having absolutely no bearing on whether it has mastered SC2.

2 Likes

I don’t disagree on a fundamental level, but 90% of the strategy in SC2 is deciding where to dedicate your limited amount of time/attention.

To a lesser degree it also involves figuring out what your opponent is doing and, lastly, what you want to do.

That isn’t true at all. You have unit comps, build orders, the shape of the map, unit positioning, how positioning morphs in battles and during posturing, economy, production, fog of war and build ambiguity; the list goes on and on. Just a single battle has more possible outcomes than the entirety of chess. Your ability to predict what is going to happen given situation X and action Y gives you the ability to steer the game down a path that maximizes your win-rate. You can’t do that without an understanding of all these systems: how they work, how they interact with each other, and how they evolve over time, both individually and as a group.

You can take one situation and change ONE parameter even a minuscule amount, and it radically rewrites everything about the game. Make one change to your build order and it can be the difference between defending an attack or not. If you can manipulate these variables better than your opponent can, you have control over where the game is headed and can use that to win. This is what strategy is.

This bot doesn’t have strategy. It can’t make even basic predictions that are blatantly obvious to humans like whether it’s a good idea to build a factory in the middle of nowhere. That’s a no-brainer for a human because humans understand positioning and how hard it is to defend production that is vulnerably placed. That’s a system within the game that the AI doesn’t comprehend even in the slightest.

1 Like

I just watched a game where it sent 20 marines out to attack a protoss but didn’t include a medivac. It overstimmed all of them while running from zealots, and they died. These bots are bad. It may have peaked at GM league over a small number of games, but I guarantee it would not be able to maintain that rank over a large number of games, because there are truly tremendous problems in its play that could be exploited. They aren’t exploited because a human would never make these mistakes, so people don’t tailor their play to prey upon the mistakes the AI makes. These problems exist because the bot has no comprehension of the strategy of the game, and if players were able to use strategy against the bot, it would perform horribly.

2 Likes

That game can be solved relatively easily, especially since every mission is against scripted AI, and almost every mission is a re-skin of the last; sometimes it isn’t even a re-skin, just a repeat.

2 Likes

No, it’s not. You can have amazing mechanics, but if you don’t know what you’re doing, it won’t matter.

1 Like

I mean… You can get to GM by cannon rushing or using marine drops.

A bot (not AlphaStar) can do this: “Automaton 2000 Micro - Marine Split Battle vs IMMvp” (YouTube). It doesn’t matter what strategy you have if you have good enough mechanics. A bot doesn’t think and never will; it only recognizes patterns that even humans can’t see or notice.

2 Likes

I just picked that because that’s the first game I thought of… The AI would have to learn to be more strategic and learn about positioning and such…

The enemy ai is downright evil on the hardest difficulty.

I found another one: https://i.imgur.com/AOtEHK9.png

Here is another one: it loses to a masters player because it is too dumb to make an observer vs lurkers:

If every masters-level zerg went lurkers vs AlphaStar, it would have a 0% win-rate vs masters-level players, which is far from the 6k MMR they claim it has.

That’s not true. You can have the absolute best mechanics, but if your strategy is to build nothing but defensive zealots from a single gateway, then you will lose.

Can you stick to calling DeepMind a failure instead of talking about balance? You’re a lot better at this.

Tay is bae, RIP.

2 Likes

You’re being pretty nitpicky to try to bash people who are obviously much smarter than you are.

The whole point of the AI was simply to ‘match the skill’ of pros, and it is obvious that they’ve been successful at doing so, since the AI can consistently take games off of pro players.

They never claimed they were trying to make an AI that plays the game perfectly, so why does it matter if it loses some games, as long as it’s successful in its stated goal?

Read:

DeepMind Technologies’ goal is to “solve intelligence”, which they are trying to achieve by combining “the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms”. They are trying to formalize intelligence in order to not only implement it into machines, but also understand the human brain, as Demis Hassabis explains: “…attempting to distil intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds.”

This is their goal. This AI does not achieve this goal.

You are being oversensitive to basic scientific skepticism.

I never claimed what you purport.

3 Likes
…other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports, and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilised superhuman capabilities, or employed hand-crafted subsystems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.

https://storage.googleapis.com/deepmind-media/research/alphastar/AlphaStar_unformatted.pdf

Did you even read the paper?

Nothing you listed contradicts my position - did you even read my posts?

2 Likes