

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding artificial intelligence.

DeepMind, the artificial intelligence company best known for developing AlphaGo and AlphaZero, revealed on Thursday that it had created an AI that could play the famous real-time strategy game StarCraft II well enough to beat some of the best human players in the world. Called AlphaStar, the AI clinched decisive victories against two grandmaster players in a series of matches played at DeepMind's headquarters.

Games have historically been a testbed for evaluating the efficiency of AI algorithms. In past years, AI researchers have managed to master board games such as checkers, chess and the ancient game of Go. More recently, video games have become an area of focus, and there has already been progress in several single- and multi-player games such as Mario, Quake, Dota 2 and several Atari games. In this regard, the mastering of StarCraft II is a milestone for several reasons. Here's what we know about AlphaStar and why its achievement is important.

The challenges of teaching AI to play StarCraft II

In 2016, DeepMind's AlphaGo AI beat the world champion in Go, a Chinese board game that scientists had thought would remain beyond the capacity of artificial intelligence for decades to come. The following year, the company repurposed the same AI to learn chess and shogi (Japanese chess) with very little input from humans.

But all of those games share two traits that limit their complexity: they are turn-based and give players perfect information. In chess and Go, each player waits for the other to finish before making their move. Also, every player can always see the entire board and every piece at all times.

In contrast, StarCraft II is a real-time strategy game, which means players must make decisions simultaneously, and it gives players only imperfect information. A player's view of the map is limited to the areas their units have previously discovered, and even in revealed areas they can only see activity where their units are currently present. For instance, a player can't see what's going on in the opponent's base unless their units are actively attacking it. It also means an enemy can sneak up on you through areas where you don't have visibility.

StarCraft also provides a richer and more complex environment. Contrary to board games, where every square is treated equally, in StarCraft the movement and performance of units change based on factors such as terrain type and elevation. The map is also much larger than an 8×8 or 19×19 grid, which means brute-force methods that try to predict every possible move and pick the best one are simply not feasible.

All these and other subtle elements have made StarCraft a huge challenge for AI algorithms.
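
To make the fog-of-war point concrete, here is a minimal, illustrative sketch (not DeepMind's code; the map size, sight radius and cell encoding are made-up values) showing how a player's observation can be restricted to the surroundings of their own units:

```python
# Illustrative sketch only: a toy grid world with a "fog of war" mask.
# Map size, sight radius and cell codes are made-up values for the example.
import numpy as np

MAP_SIZE = 16        # toy map; real StarCraft II maps are far larger
SIGHT_RADIUS = 2     # hypothetical sight range, in grid cells

def visible_observation(world, own_unit_positions, radius=SIGHT_RADIUS):
    """Return the map as one player sees it: cells outside the sight
    radius of that player's units are masked out as -1 ("unknown")."""
    mask = np.zeros(world.shape, dtype=bool)
    for r, c in own_unit_positions:
        r0, r1 = max(0, r - radius), min(world.shape[0], r + radius + 1)
        c0, c1 = max(0, c - radius), min(world.shape[1], c + radius + 1)
        mask[r0:r1, c0:c1] = True
    observation = np.full(world.shape, -1)  # -1 = fog of war
    observation[mask] = world[mask]
    return observation

# World state: 0 = empty ground, 1 = my unit, 2 = enemy unit.
world = np.zeros((MAP_SIZE, MAP_SIZE), dtype=int)
world[3, 3] = 1        # my unit
world[12, 12] = 2      # enemy unit far away, outside my sight range
obs = visible_observation(world, [(3, 3)])

print("Enemy visible?", bool((obs == 2).any()))           # False: hidden by fog
print("Unknown cells:", int((obs == -1).sum()), "of", MAP_SIZE * MAP_SIZE)
```

A chess or Go engine never needs a step like this: the full board is the observation. Here the agent has to act while most of the map is unknown and reason about what the hidden cells might contain.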

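The size argument can also be made concrete with back-of-the-envelope arithmetic. The figures below are commonly cited approximations (roughly 35 legal moves per chess position over about 80 plies, roughly 250 per Go position over about 150 plies), not exact values; the point is only the order of magnitude:

```python
# Rough game-tree size: (moves per turn) ** (number of turns), reported as
# a power of ten. Branching factors and game lengths are approximations.
from math import log10

def log10_tree_size(branching_factor, depth):
    """log10 of branching_factor ** depth, without building the huge number."""
    return depth * log10(branching_factor)

for name, branching, depth in [("chess (8x8)", 35, 80), ("Go (19x19)", 250, 150)]:
    print(f"{name}: about 10^{log10_tree_size(branching, depth):.0f} lines of play")

# Prints roughly 10^124 for chess and 10^360 for Go -- already far beyond
# enumeration. A real-time game on a much larger map, with many units acting
# at once and most of the map hidden, rules out this kind of search entirely.
```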