You may have heard that AlphaGo, an artificial intelligence built by Google’s DeepMind, has mastered Go. This is a big deal, because it’s hard to build a computer that’s good at games. In video games, there’s always one particular move that confuses the AI opponent: football games fall for trick plays over and over, racing games have AIs that don’t understand how to overtake other cars safely, and so on. Games are hard, humans are smart, and computers aren’t. In fact, computers were thoroughly mediocre at traditional games like chess for literally decades.
Sure, computers are great at chess now. Everyone knows that IBM’s Deep Blue supercomputer won a chess match against the reigning world champion Garry Kasparov, but that match was a rematch. The year before, Kasparov had handily beaten Deep Blue. The machine only won the rematch after literally doubling its computing power, sharpening its brute-force analysis of the outcomes of nearly every possible move. Deep Blue was one of the 250 most powerful supercomputers in the world at the time. A little more than a decade later, an ordinary smartphone could run a chess program capable of trouncing all but a handful of players on the planet. Computers got way smarter in a hurry.
So what happened with this Go thing? Are we in the ‘supercomputer ekes out a win’ stage, or the ‘cellphone checkmates you in thirty seconds’ stage? And how do machines go from one stage to the other?