Go match — final reflections

by admin on March 17, 2016

My ISP had server problems over the last couple of days, so I wasn’t able to post in a timely fashion and the “news” is no longer news. The match between Lee Sedol, the top human go player, and AlphaGo, a new computer program developed by Google, ended in a 4-1 victory for the computer. As I said in an earlier post, I’m a complete novice at go, but it’s been fun to follow the commentary at Go Game Guru. I’ve seen the experts go through the same stages that chess masters did 20-30 years ago against computers: confidence, followed by disbelief, followed by reconciliation.

AlphaGo took all the suspense out of the match early, racing to a 3-0 lead. David Ormerod at Go Game Guru said that the third game especially gave him a “sick feeling” in his stomach, because Sedol played just the kind of game he’s good at, didn’t make any obvious mistakes, but simply got outplayed by the computer. This game made it clear that there weren’t going to be any quick fixes for the human.

In the fourth game, with the match already lost, Sedol seemed to play with less pressure on him. I don’t understand any of the details, of course, but the 78th move (his 39th) was apparently a big gamble that very few people saw coming — a little bit like an “all in” in poker. And the computer didn’t see it coming either. It lost its grip on the position, and then something very interesting happened. When the computer realized that its position was bad, it started playing very poor moves. The speculation is that AlphaGo is programmed always to maximize the probability of winning. As that probability went down and down, it had to “hope” for more and more unlikely blunders by Sedol, and so it played more and more unrealistic moves. If so, this is a definite weakness for AlphaGo. If you get it into a position where it has less than a 50 percent chance of winning, it will start to thrash. The only problem is that it’s damned hard to get it into a position where it has less than a 50 percent chance to win. Sedol was only able to do it once.
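If the speculation is right, the policy is easy to caricature: on every turn, play the move with the highest estimated probability of winning, whatever that probability is. Here is a minimal Python sketch of that idea and of the failure mode described above; the move names, probabilities, and helper function are all hypothetical illustrations, not AlphaGo’s actual code:

```python
def choose_move(moves, win_prob):
    """Pick the candidate move with the highest estimated P(win).

    `moves` is a list of candidate moves; `win_prob` maps a move to the
    engine's estimate of its winning probability. This is only an
    illustration of a win-probability-maximizing policy.
    """
    return max(moves, key=win_prob)

# When every candidate is losing (all probabilities well below 0.5),
# the argmax is often a "swindle" line that only works if the opponent
# blunders -- exactly the thrashing behavior described above.
probs = {"solid": 0.10, "swindle": 0.12}  # made-up numbers
print(choose_move(["solid", "swindle"], probs.get))  # prints "swindle"
```

The point of the caricature is that such a policy has no concept of “losing gracefully”: once every move is bad, the least bad move by this metric is the one that banks on an unlikely mistake.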

After the fourth game the human fans could at least have some hope that Sedol had spotted a way to take advantage of the computer. If he won the last game, there would be some grounds for believing he might have prevailed in a longer match.

But the last game extinguished any doubt. Sedol again seemed to be affected by the constant pressure from the computer, playing moves that were a little bit more conservative than his usual style. This is something I’m very familiar with from playing chess against the computer. I see a line that looks better for me, which I wouldn’t hesitate to play against a human, but I also see that it leaves me with a couple of loose pieces, or maybe it opens up a file for the computer. Even though I don’t see any concrete threats, I shy away from that line because I’m afraid of what might happen. That fear is a difficult emotion for a human to overcome, and it’s one of the things that ultimately makes playing against a computer frustrating. You know you shouldn’t play so cautiously, but you also know from experience that if you don’t, the computer will eventually get you.

So the match ended 4-1, and there was really no doubt that the computer was the stronger player. Ormerod pointed out that it’s cool that we are still at a stage where the top human can beat the computer, if he’s lucky and the stars align. In chess, the era when we can be competitive has passed. The “roughly equal” era will probably not last very long in go either, but we can anticipate some good computer-versus-human go matches while it lasts.

A couple of other details. First, one thing I learned in my interviews before the match but didn’t see reported anywhere was that besides the $1 million prize for the match, there was also a $25,000 prize for each game won. So Sedol did get $25,000 for his nice victory in game 4. A good day at the office.

Finally, I would really like to see somebody program a chess computer with the same “deep learning” algorithm that made such a leap forward in the game of go. I think that there is reason to believe that such a program could improve upon the best current computer chess programs, Komodo and Stockfish. Perhaps you could even reach the ultimate goal — a chess program that literally could never be beaten. You would never be able to prove that it was unbeatable, but it might be empirically unbeatable. If so, this would be the end of “centaur chess.” Also, it would be trouble for correspondence chess, which would have to go to an honor system to survive. Currently, players in the world correspondence championship can, and do, use computers. The draw rate is very high, though (85 to 90 percent in the last few world championships). If an unbeatable machine came along, the draw rate would hit 100 percent and correspondence chess with computer assistance would become a pointless exercise.

However, to end on a more positive note, I think that AlphaGo has been good for the game of go, at least in the short run. It has brought media attention to go in the western world that it has never gotten before. I also think that the advent of computer programs as strong as any human will democratize the game, as it has for chess. Currently, to become a go professional you have to go to a go school and pass through a rigorous training process. With computer training available, people might be able to reach professional level in other ways.

I think it’s no accident that the chess scene has gotten younger since computer chess programs got good. The new generation has learned to use those programs effectively. And chess knowledge has definitely gotten more easily available. So I hope, and expect, that go will see an upsurge in interest, especially among younger players. It’s possible that a few old-timers will give the game up, but I think they will be greatly outnumbered by the players who will find that the barriers to entry are lower than before.

 



paul b. March 17, 2016 at 1:36 pm

“Even though I don’t see any concrete threats, I shy away from that line because I’m afraid of what might happen.”

Exactly the problem: inhibition stifles creativity. I play on the German site Playchess.com both as a paying member and as a free guest. As a member, every game affected my rating and I was afraid to take risks. Playing as a guest, I don’t have that inhibition; I can play experimental lines without fear of losing. When I adopted Dana’s King’s Bishop Gambit as my favorite opening, I honed my repertoire as a guest, then switched to rated member games. I kept a running tally of my rated results, and when my subscription expired after a year I was +47 with the KBG and +17 with the Morra Gambit, which Dana suspects is flawed but which eliminates the need to study endless lines of the Sicilian. I’ll re-subscribe to Playchess soon, but right now I’m having so much fun playing as a guest and taking chances over the board without obsessing about my rating.



brabo March 17, 2016 at 11:31 pm

“Finally, I would really like to see somebody program a chess computer with the same “deep learning” algorithm that made such a leap forward in the game of go.”
That already exists; several articles were published about such a program last year.
https://www.technologyreview.com/s/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/
Personally, I think chess benefits much more from brute-force calculation: the smallest detail often changes the complete evaluation (a butterfly effect). I wrote several articles about that on my blog. One example: http://chess-brabo.blogspot.com/2014/05/the-einstellung-effect.html


admin March 18, 2016 at 10:20 am

Hi brabo, you’re right! I had heard about Lai’s program last year but didn’t realize that it was a deep-learning algorithm, like AlphaGo. I still think that some combination of deep learning with brute force should be able to outperform the previous state of the art in chess programs.

One thing that I think would be interesting about a deep-learning-based chess program is that instead of evaluating a position in terms of pawns (e.g., White has a 0.73-pawn advantage) it would evaluate it in terms of winning probabilities (White has a 60 percent chance of winning, 30 percent chance of drawing, 10 percent chance of losing). This would not only be a novel way to look at the game, but would also eliminate some of the remaining weaknesses of computers. For example, computers still tend (in my experience) to misevaluate blockaded positions. But a deep net would say, “Oh, this position has an 85 percent chance of a draw, so I don’t want to go into it.”
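A common way to bridge the two styles of evaluation is to squash a centipawn score through a logistic curve, so that 0.00 maps to a 50 percent winning chance and large advantages saturate toward certainty. A hedged sketch in Python; the scale constant k is made up for illustration and is not taken from any actual engine:

```python
import math

def win_probability(centipawns, k=0.004):
    """Map a traditional centipawn score to an estimated win probability
    via a logistic curve. k controls how quickly an advantage converts
    into winning chances; 0.004 is an illustrative value only.
    """
    return 1.0 / (1.0 + math.exp(-k * centipawns))

print(round(win_probability(0), 2))    # 0.5 -- an equal position
print(round(win_probability(300), 2))  # 0.77 -- a winning edge, with this k
```

The symmetry of the curve also means a -300 score maps to the complementary probability, which is one reason this kind of calibration is appealing: the evaluation scale means the same thing for both sides.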


brabo March 18, 2016 at 11:39 am

“One thing that I think would be interesting about a deep-learning based chess program is that instead of evaluating a position in terms of pawns (e.g., White has a 0.73-pawn advantage) it would evaluate it in terms of winning probabilities.”
That also already exists. A couple of years ago there was an article on ChessBase about calibrated evaluations: http://en.chessbase.com/post/houdini-4-the-800-pound-gorilla-2

Here is the most relevant part:
“Houdini 4 uses calibrated evaluations in which engine scores correlate directly with the win expectancy in the position. A +1.00 pawn advantage gives a 80% chance of winning the game against an equal opponent at blitz time control. At +2.00 the engine will win 95% of the time, and at +3.00 about 99% of the time. If the advantage is +0.50, expect to win nearly 50% of the time.”


Hal Bogner March 18, 2016 at 8:50 am

Paul B’s observation on your observation, Dana, is true in the world of physical sports, too. Against the very best goaltenders in ice hockey, even the best shooters hesitate or try to make too fine a shot, when they “had the goalie beat.” Dominik Hasek famously intimidated many shooters, and today, Jonathan Quick likewise causes many shooters to shoot over the net because they are so sure that low shots have zero chance of getting past him.


