Human vs. Computer Go Competitions @ CIS Flagship Conferences |
Conference
- FUZZ-IEEE 2009
- IEEE WCCI 2010
- IEEE SSCI 2011
- FUZZ-IEEE 2011
- IEEE WCCI 2012
- FUZZ-IEEE 2013
- FUZZ-IEEE 2015
|
Human vs. Computer Go Competitions |
Activities Held
- 2008 Computational Intelligence Forum & World 9x9 Computer Go Championship
- Taiwan Open 2009
- TAAI 2012
- MoGoTW
|
Description |
The technique of Monte Carlo Tree Search (MCTS) has revolutionized the field of computer game-playing, and is starting to have an impact in other search and optimization domains as well. In past decades, the dominant paradigm in game algorithms was alpha-beta search. This technique, with many refinements and game-specific engineering, led to breakthrough performances in classic board games such as chess, checkers, and Othello. After Deep Blue's famous victory over Kasparov in 1997, some of the research focus shifted to games where alpha-beta search was not sufficient. Most prominent among these games was the ancient Asian game of Go.
During the last few years, the use of MCTS techniques in computer Go has really taken off, but the groundwork was laid much earlier. In 1990, Abramson proposed to model the expected outcome of a game by averaging the results of many random games. In 1993, Bruegmann proposed Monte-Carlo techniques for Go using almost-random games, and developed the refinement he termed all-moves-as-first (AMAF). Ten years later, a group of French researchers working with Bruno Bouzy took up the idea. Bouzy's Indigo program used Monte-Carlo simulation to decide between the top moves proposed by a classical knowledge-based Go engine. Remi Coulom's Crazy Stone was the first to add the crucial second element: a selective game tree search controlled by the results of the simulations. The last piece of the puzzle was the Upper Confidence bounds applied to Trees (UCT) algorithm of Kocsis and Szepesvari, which applied ideas from the theory of multi-armed bandits to the problem of how to selectively grow a game tree. Gelly and Wang developed the first version of MoGo, which among other innovations combined Coulom's ideas, the UCT algorithm, and pattern-directed simulations. AMAF was revived and extended in Gelly and Silver's Rapid Action Value Estimation (RAVE), which computes AMAF statistics in all nodes of the UCT tree.
Rapid progress in applying knowledge and parallelizing the search followed. Today, programs such as MoGo/MoGoTW, Crazy Stone, Fuego, Many Faces of Go, and Zen have achieved a level of play that seemed unthinkable only a decade ago. These programs are now competitive at a professional level for 9x9 Go and at amateur Dan strength on 19x19.
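To make the UCT idea described above concrete, the following minimal Python sketch shows how a program might score and select the children of a search-tree node with the UCB1 formula of Kocsis and Szepesvari. It is only an illustration under simplifying assumptions (a bare node with win/visit counters and a fixed exploration constant); it is not the actual code of MoGo, Crazy Stone, Fuego, or any other program mentioned here.
    import math

    class Node:
        """One node of a UCT search tree (illustrative only)."""
        def __init__(self, move=None, parent=None):
            self.move = move          # move leading to this node (encoding is hypothetical)
            self.parent = parent
            self.children = []
            self.wins = 0.0           # total reward of simulations through this node
            self.visits = 0           # number of simulations through this node

        def ucb1(self, c=1.4):
            # Unvisited children are always tried first.
            if self.visits == 0:
                return float("inf")
            exploit = self.wins / self.visits
            explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
            return exploit + explore

        def best_child(self):
            # Selection step of UCT: descend to the child with the highest UCB1 score.
            return max(self.children, key=lambda ch: ch.ucb1())

    # Tiny usage example with made-up statistics:
    root = Node()
    root.visits = 30
    for w, v in [(7, 10), (12, 15), (2, 5)]:
        child = Node(parent=root)
        child.wins, child.visits = w, v
        root.children.append(child)
    print(root.best_child().wins)   # child chosen by the UCB1 trade-off
In a full MCTS program, this selection step would be followed by expansion, a (pattern-directed) random playout, and backpropagation of the result; RAVE additionally maintains AMAF statistics in each node and blends them with the UCT value.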
One measure of success is competitions. In Go, Monte-Carlo programs now completely dominate classical programs on all board sizes (though no one has tried boards larger than 19x19). Monte-Carlo programs have also achieved considerable success in play against humans. An early sign of things to come was a series of games on a 7x7 board between Crazy Stone and professional 5th Dan Guo Juan, in which Crazy Stone demonstrated almost perfect play. Since 2008, National University of Tainan (NUTN) in Taiwan and other academic organizations have hosted or organized several human vs. computer Go events, including the 2008 Computational Intelligence Forum & World 9x9 Computer Go Championship and the 2009 Invited Games for MoGo vs. Taiwan Professional Go Players (Taiwan Open 2009). In addition, the FUZZ-IEEE 2009 Panel, Invited Sessions, and Human vs. Computer Go Competition was held at the 2009 IEEE International Conference on Fuzzy Systems in Aug. 2009. This event was the first human vs. computer Go competition hosted by the IEEE Computational Intelligence Society (CIS) at a CIS flagship conference. In 2010, MoGo and Many Faces of Go achieved wins against strong amateur players on 13x13 with only two handicap stones. On the full 19x19 board, programs have racked up a number of wins (but still far more losses) with 6 and 7 handicap stones against top professional Go players; Zen also recently won with handicap 4 against Masaki Takemiya (9P). Computer Go programs have also won both as White and as Black against top players in 9x9 games.
In April 2011, MoGoTW set a new world record by winning the first 13x13 game against a 5th Dan professional Go player with handicap 3 and reversed komi of 3.5. It also won 3 out of 4 games of Blind Go on 9x9. In June 2011, in the three-day competition held at FUZZ-IEEE 2011, four programs (MoGoTW, Many Faces of Go, Fuego, and Zen) were invited to participate, and more than ten invited professional Go players accepted the challenge, including Chun-Hsun Chou (9P), Ping-Chiang Chou (5P), Joanne Missingham (5P), and Kai-Hsin Chang (4P). The computer Go program Zen from Japan won every game it played, including a 19x19 game against Chun-Hsun Chou (9P) with handicap 6, suggesting that the level of computer Go programs in the 19x19 game is around 4D. Many Faces of Go and Zen also won against a 5P Go player on 13x13 with handicap 2 and komi 3.5, improving on MoGoTW's April result by one stone. In addition, MoGoTW won all twenty 7x7 games under specific komi settings, namely komi 9.5 when MoGoTW played White and 8.5 when it played Black, suggesting that perfect play is a draw with komi 9. MoGoTW with adaptive learning ability first played against amateur Go players from kyu to dan level in Taiwan on May 6 and May 27, 2012. Estimating the level of an opponent is useful for choosing an opponent of the right strength and for attributing relevant ranks to players. We estimate the relation between the strength of a player and the number of simulations needed for an MCTS program to play at the same strength. In addition, the program also plays against many players at the same time and tries to estimate their strengths.
In June 2012, the Human vs. Computer Go Competition @ IEEE WCCI 2012 included physiological measurements for studying the cognitive science of the game of Go [11]. The game results show that MoGoTW outperformed humans in 7x7 Go, making no mistakes whereas humans, including one 6P player, did. Humans are still stronger than computers in 9x9 Go with komi 7. In June 2011, computers won four 13x13 games out of 8 against professional players with handicap 2 and komi 3.5 (i.e., roughly handicap 1.5), the best performance so far [10]. For 13x13 kill-all Go, in Tainan in 2011, MoGoTW won as White with handicap 8 against Ping-Chiang Chou (5P), which is seemingly the best performance so far. For 19x19, 4 handicap stones are still needed, giving the program an advantage in each corner, so professional Go players have remarked that switching to handicap 3 is a huge challenge. Level assessment in Go shows that our artificial player is also able to analyze the strength of strong players. Importantly, along with this ability to evaluate the strength of humans, the computer can adaptively adjust its strength to the opponent in order to increase entertainment, which is helpful for motivating children to learn. With the collected physiological signals, it will be feasible to analyze game-level statistics to understand the variance of the strategies employed by the human and the computer in each game [11].
Classical rankings (1 Dan, 2 Dan, ...) are integers, leading to a rather imprecise estimate of the opponent's strength. Therefore, a sample of games played against a computer is used to estimate the human's strength [12]. In order to increase the precision, the strength of the computer is adapted from one move to the next by increasing or decreasing the computational power based on the current situation and the results of games. The human can decide some specific conditions, such as komi and board size. In [12], type-2 fuzzy sets (T2 FSs) whose parameters are optimized by a genetic algorithm are used to estimate the rank in a stable manner, independently of the board size. More precisely, an adaptive Monte Carlo Tree Search (MCTS) estimates the number of simulations corresponding to the strength of its opponent. At FUZZ-IEEE 2013, held in India in July 2013, professional and amateur Go players were invited to play against computer Go programs, including MoGoTW with adaptive learning (Taiwan/France), Coldmilk (Taiwan), Zen (Japan), and Many Faces of Go (USA), to analyze the humans' thinking behavior and the programs' computational and machine-learning abilities. Additionally, domain experts were interested in discussing how to combine machine learning with fuzzy sets to construct adaptive learning that assesses a Go player's rank.
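As a rough illustration of this adaptation idea, the Python sketch below adjusts the per-move simulation budget of a hypothetical MCTS program after each game so that its strength tracks the opponent's. The feedback rule, constants, and function names are invented for illustration and are far simpler than the T2 FS and genetic-algorithm method actually used in [12].
    def update_simulation_budget(budget, program_won, factor=1.5,
                                 min_budget=100, max_budget=100000):
        """Adapt the number of playouts per move after each game (illustrative rule)."""
        # If the program won, it was too strong: reduce its playouts.
        # If it lost, it was too weak: increase them.
        budget = budget / factor if program_won else budget * factor
        return int(min(max(budget, min_budget), max_budget))

    # Hypothetical series of game outcomes from the program's point of view.
    budget = 2000
    for program_won in [True, True, False, True]:
        budget = update_simulation_budget(budget, program_won)
    print(budget)   # the budget the process settles on is a rough proxy for the opponent's rank
The budget at which wins and losses balance out gives an estimate of the number of simulations matching the human's strength, which can then be mapped to a rank.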
In 2015, the Human vs. Computer Go Competition (in its seventh year) was held as a competition event during the 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2015). One professional player, Ping-Chiang Chou (6P), and four strong amateur Go players, Shi-Jim Yen, Ching-Nung Lin, Wen-Chih Chen (6D), and Francesco Marigo (4D), competed with the world's best Go program (Zen) and two other strong Go bots (CGI and Jimmy) from two countries. The results show that a significant gap still exists between humans and bots. Since 2013, the progress of computer Go has been bounded at four handicap stones against professional Go players. Computation-based search does not improve much when scaled up to thousands of cores. Instead, pattern-based knowledge interpretation has become mainstream, and how to translate millions of patterns into accurate categories plays an important role in future improvement. The professional Go player easily beat the world's strongest program in two games with four handicap stones, but lost when the handicap was increased to five. This program also beat a strong amateur Go player with a score of 2:1, similar to the result at FUZZ-IEEE 2013.
|
Purpose |
In order to enhance the fun of playing Go between humans and computer Go programs and to stimulate the development of and research on computer Go programs, we submit this proposal and hope to have the chance to hold the Human vs. Computer Go Competition @ IEEE CIG 2015 at the Tayih Landis Hotel, Tainan, Taiwan. Currently, the level of computer Go programs on 19x19 approaches that of a 6D human player. Hence, we will focus on human vs. computer 19x19 games without handicap at IEEE CIG 2015. The objective of the competition is to highlight ongoing research on computational intelligence approaches as well as their applications in game domains. In addition, it is hoped that advances in computational intelligence will push the field of computer Go further than before, eventually achieving as much as computer chess or Chinese chess.
|
Rule |
For details, please refer to the Go Tournament Rules of the Computer Olympiad, published on Sept. 10, 2010.
|
Human |
Taiwanese Professional and Amateur Go Players
- Chun-Hsun Chou (9P)
- Kai-Hsin Chang (5P)
- Li-Chun Yu (1P)
- Yung-Hao Hung (6D)
- Bo-Yuan Hsiao (5D)
- Chi Chang (5D)
|
|
Computer Go Program |
- Zen (Japan)
- CGI (Taiwan)
- Aya (Japan)
|
|
References |
[1] B. Bouzy and T. Cazenave, "Computer Go: an AI-oriented survey," Artificial Intelligence Journal, vol. 132, no. 1, pp. 39-103, 2001.
[2] C. S. Lee, M. Mueller, and O. Teytaud, "Special issue on Monte Carlo techniques and computer Go," IEEE Transactions on Computational Intelligence and AI in Games, vol. 2, no. 4, pp. 225-228, Dec. 2010.
[3] Y. Wang and S. Gelly, "Modifications of UCT and sequence-like simulations for Monte-Carlo Go," in Proceedings of the 2007 IEEE Symposium on Computational Intelligence and Games (CIG07), Hawaii, USA, 2007, pp. 175-182.
[4] C. S. Lee, M. H. Wang, G. Chaslot, J. B. Hoock, A. Rimmel, O. Teytaud, S. R. Tsai, S. C. Hsu, and T. P. Hong, "The computational intelligence of MoGo revealed in Taiwan's computer Go tournaments," IEEE Transactions on Computational Intelligence and AI in Games, vol. 1, no. 1, pp. 73-89, Mar. 2009.
[5] C. S. Lee, M. H. Wang, T. P. Hong, G. Chaslot, J. B. Hoock, A. Rimmel, O. Teytaud, and Y. H. Kuo, "A novel ontology for computer Go knowledge management," in Proceedings of the 2009 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2009), Jeju Island, Korea, Aug. 2009, pp. 1056-106.
[6] S. J. Yen, C. S. Lee, and O. Teytaud, "Human vs. computer Go competition in FUZZ-IEEE 2009," ICGA Journal, vol. 32, no. 3, pp. 178-181, Sept. 2009.
[7] J. B. Hoock, C. S. Lee, A. Rimmel, F. Teytaud, M. H. Wang, and O. Teytaud, "Intelligent agents for the game of Go," IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 28-42, Nov. 2010.
[8] C. S. Lee, M. H. Wang, O. Teytaud, and Y. L. Wang, "The game of Go @ IEEE WCCI 2010," IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 6-7, Nov. 2010.
[9] M. H. Wang, C. S. Lee, Y. L. Wang, M. C. Cheng, O. Teytaud, and S. J. Yen, "The 2010 contest: MoGoTW vs. human Go players," ICGA Journal, vol. 33, no. 1, pp. 47-50, Mar. 2010.
[10] S. J. Yen, S. Y. Chiu, C. W. Chou, C. S. Lee, H. Doghmen, F. Teytaud, and O. Teytaud, "Human vs. computer Go competition in FUZZ-IEEE 2011," ICGA Journal, vol. 34, no. 4, pp. 243-247, Dec. 2011.
[11] C. S. Lee, O. Teytaud, M. H. Wang, and S. J. Yen, "Computational intelligence meets game of Go @ IEEE WCCI 2012," IEEE Computational Intelligence Magazine, vol. 7, no. 4, pp. 10-12, Nov. 2012.
[12] C. S. Lee, M. H. Wang, M. J. Wu, O. Teytaud, and S. J. Yen, "T2FS-based adaptive linguistic assessment system for semantic analysis and human performance evaluation on game of Go," IEEE Transactions on Fuzzy Systems, vol. 23, no. 2, pp. 400-419, Apr. 2015. |