NEW YORK — A computer program has beaten a human champion at the ancient Chinese board game Go, marking a significant advance for development of artificial intelligence.
The program had taught itself how to win, and its developers say its learning strategy may someday let computers help solve real-world problems like making medical diagnoses and pursuing scientific research.
The program and its victory are described in a paper released Wednesday by the journal Nature.
Computers previously have surpassed humans at other games, including chess, checkers and backgammon. But among classic games, Go has long been viewed as the most challenging for artificial intelligence to master.
Go, which originated in China more than 2,500 years ago, involves two players who take turns putting markers on a checkerboard-like grid. The object is to surround more of the board with one’s markers than one’s opponent does, and to capture the opponent’s pieces by surrounding them.
While the rules are simple, playing it well is not. It’s “probably the most complex game ever devised by humans,” Demis Hassabis of Google DeepMind in London, one of the study authors, told reporters Tuesday.
The new program, AlphaGo, defeated the European champion in all five games of a match in October, the Nature paper reports.
In March, AlphaGo will face legendary player Lee Sedol in Seoul, South Korea, for a $1 million prize, Hassabis said.
Martin Mueller, a computing science professor at the University of Alberta in Canada who has worked on Go programs for 30 years but didn’t participate in the AlphaGo work, said the new program “is really a big step up from everything else we’ve seen. … It’s a very, very impressive piece of work.”
Until now, Go had been one of the last classic games in which the best human players could still beat the best artificial intelligence. Other groups have been pursuing the same goal: Facebook’s AI research team, which began building its own Go-playing system last year, says its program can choose moves in as little as 0.1 seconds while matching the strength of earlier systems that took years to develop. Researchers have been trying to teach computers to win at Go for some 20 years.