Hosts

IEEE CIS

FUZZ-IEEE 2017

NUTN, Taiwan
Co-sponsors

HeroIT.com Co. Ltd.

 
 
Overview of Competition

With the recent success of AlphaGo, there has been great interest among students and professionals in applying machine learning to gaming, and in particular to the game of Go. Several conferences have held competitions pitting humans against computer programs, or computer programs against each other. While computer programs already play better than humans (even high-level professionals), machine learning still offers interesting prospects, both from the fundamental point of view, (1) to push the limits of game playing even further (by having programs play against each other) and (2) to better understand machine intelligence and compare it to human intelligence, and from the practical point of view of enhancing the human playing experience by coaching professionals to play better or training beginners. The latter problem raises interesting questions about the explainability of machine game play. This competition will evaluate the potential of learning machines to teach humans.

Novelty

Previous human vs. machine Go playing competitions have focused on having machines compete with humans. This competition retains that aspect and adds a new one: human-machine collaboration. In this competition, students and researchers (ML competitors) will propose new machine learning techniques, or apply existing ones, to create programs that play Go and/or teach humans to play, or suggest better moves. Interestingly, several high-end systems have now been made available as open source, making it possible to build Go teaching systems on top of existing state-of-the-art game-playing technologies. We will invite pre-selected humans and machines (Go competitors) to participate in a high-end Go tournament. For this first event, we intend to invite professional Go players and select proven computer Go systems. The professional Go players will evaluate the pedagogical capabilities of the programs designed by the ML competitors, i.e., their ability to provide good guidance on how to play.
To participate in the live competition, ML competitors will be pre-selected using the DyNaDF Platform (https://sites.google.com/site/dynadfgo/home), which we used in previous challenges. To simplify the task and make it possible for students to contribute, we will allow them to contribute a post-processing module built on top of an existing structure. This structure involves three stages: Stage 1 provides the prediction results of the Darkforest Go engine (Facebook's deep learning Go player) [3], Stage 2 provides the inferred results of the knowledge-based engine (based on the IEEE FML standard), and Stage 3 combines the ML competitor's model with the two previous stages to predict the likely winner of the game. We will supply training and test data taken from 60 games played by Google Master against top professional Go players in Dec. 2016 and Jan. 2017. The final stage of our system (Stage 4) will include a robot engine, which can speak and explain the situation to Go players in real time. To that end, the ML competitors will have to supply a textual explanation of the proposed (best) moves.
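As a rough illustration of the kind of post-processing module we have in mind, the sketch below turns a proposed move, the engine's winning-rate estimate, and an FML-style situation code into a one-sentence explanation. The wording and the mapping of situation codes to colours are assumptions, not part of the official platform.

# Hypothetical sketch: render a proposed move and the inferred situation as
# plain text that the Stage 4 robot engine could speak. The mapping of the
# situation codes (-2..2) to colours below is an assumption.
SITUATION_TEXT = {
    -2: "Black is clearly ahead",
    -1: "Black is slightly ahead",
     0: "the game is roughly even",
     1: "White is slightly ahead",
     2: "White is clearly ahead",
}

def explain_move(move: str, win_rate: float, situation: int) -> str:
    """Compose a one-sentence explanation of a proposed (best) move."""
    return (f"The suggested move is {move}; the engine estimates a "
            f"{win_rate:.0%} winning rate, and {SITUATION_TEXT[situation]}.")

print(explain_move("D4", 0.57, -1))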
For pre-selection, the students should show that their proposed approach is viable and produces results in a reasonable time.

Competition Description


(1) Testing Platform: DyNaDF Platform (NCHC, Taiwan / NUTN, Taiwan / OPU, Japan / TMU, Japan);
(2) Open Source: Darkforest Go Engine, FAIR, USA;
(3) FML Tool: Giovanni Acampora Lab, Italy / KWS Center, Taiwan;
(4) Testing Data: KWS Center, Taiwan / OASE Lab., Taiwan / Nojima Lab., Japan / Saga Lab., Japan;
(5) Verification and Validation Go Players: invited top professional Go players or student Go players from Japan / Taiwan / France / Italy / Canada.

Rule

This competition invites students to propose or apply a machine learning model for predicting the outcome of games of Go on the DyNaDF Platform (https://sites.google.com/site/dynadfgo/home). Based on the first-phase (Phase I) prediction results of the Darkforest Go engine and the second-phase (Phase II) inferred results of the FML assessment engine, students should propose or apply a machine learning model in the third phase (Phase III) to predict the likely winner of the game. We chose 60 games played by Google Master against top professional Go players in Dec. 2016 and Jan. 2017 as the training and testing data. Finally, we combine the playing of Go with the fourth-phase (Phase IV) robot engine, which reports the real-time situation to Go players. The students should show that the proposed approach is feasible. The organizers will hold a demo game at IFSA-SCIS 2017 (June, Japan), FUZZ-IEEE 2017 (July, Italy), or IEEE SMC 2017 (Oct., Canada).
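To make the phase ordering concrete, here is a minimal sketch of the four-phase data flow. Every engine is replaced by a trivial stand-in; only the flow of information between phases reflects the description above, and all function names and internals are assumptions.

# Minimal sketch of the Phase I-IV flow; all engines are trivial stand-ins.
from typing import List

def phase1_darkforest(move: str) -> float:
    """Stand-in for the Darkforest Go engine: returns a winning-rate estimate."""
    return 0.5  # placeholder value

def phase2_fml_assess(win_rate: float) -> int:
    """Stand-in for the FML assessment engine: maps a winning rate to a
    game-situation code in {-2, -1, 0, 1, 2}."""
    return max(-2, min(2, int(round((win_rate - 0.5) * 4))))

def phase3_predict_winner(situations: List[int]) -> int:
    """Stand-in for the competitor's classifier: Black wins (-1), draw (0),
    or White wins (1)."""
    total = sum(situations)
    return (total > 0) - (total < 0)

def phase4_robot_report(winner: int) -> str:
    """Stand-in for the robot engine's spoken report."""
    return {-1: "Black is predicted to win.",
             0: "The game looks even.",
             1: "White is predicted to win."}[winner]

moves = ["B D4", "W Q16", "B Q3"]
situations = [phase2_fml_assess(phase1_darkforest(m)) for m in moves]
print(phase4_robot_report(phase3_predict_winner(situations)))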

Student participants build classifiers using a data set of 60 games and submit them. We then examine the generalization ability of the submitted classifiers on an unseen data set of additional games. The winner of this competition is the participant who develops the classifier that generalizes best to unseen games.
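A minimal sketch of this evaluation protocol, assuming (only for illustration) that each pattern is one 12-value time window and that a scikit-learn classifier is submitted; synthetic random data stand in for the real competition files.

# Sketch of the generalization check: fit on the 60 provided games and score
# on held-out, unseen games. Random data stand in for the real feature files.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 60 training games: 12 inputs each (M1..M11 in {-2..2} plus AM), label in {-1, 0, 1}
X_train = rng.integers(-2, 3, size=(60, 12))
y_train = rng.choice([-1, 0, 1], size=60)

# Unseen games used by the organizers to score the submitted classifier
X_test = rng.integers(-2, 3, size=(20, 12))
y_test = rng.choice([-1, 0, 1], size=20)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen games:", accuracy_score(y_test, clf.predict(X_test)))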

Student participants may also propose a new framework to predict the game result. We, the organizers, do not prescribe how the participants use the data we provide. The winner in this case is selected by the competition committee according to the merit of the framework the participants propose.

Tasks and Application Scenarios

The DyNaDF Platform has four stages. In the first stage (Stage I), the Darkforest Go engine [3] predicts the next move and the winning rate. In the second stage (Stage II), the fuzzy markup language (FML) based assessment engine [4, 5] infers the current game situation from the outputs of the Darkforest Go engine. In the third stage (Stage III), students use up to four time windows, each consisting of (M1, M2, ..., M11, AM), to predict the game result; Mi denotes the i-th game situation, represented by an integer in {-2, -1, 0, 1, 2}, and AM is the partial result derived from (M1, M2, ..., M11). We provide the data gathered in Stage I and Stage II to student participants: four time windows extracted from the time series of game situations inferred by the FML-based assessment engine, where each window contains the 11 previous game situations (M1, M2, ..., M11) and the partial result (AM) predicted at that time. From these inputs, the participants build a classifier that predicts the game result as Black wins (-1), draw (0), or White wins (1). In the fourth stage (Stage IV), the robot engine reports the current game situation to the user; the organizers will integrate the developed classifiers into the robot engine (a small data-layout sketch follows the stage list below). Brief descriptions of the stages are as follows:

Stage I (FAIR Darkforest Go / KWS Center): FB Darkforest Go Open Source on DyNaDF Platform;
Stage II (KWS Center/ OASE Lab): FML Assessment Engine on DyNaDF Platform;
Stage III (NUTN/OPU): Machine Learning Competition based on training data from Stage I and Stage II;
Stage IV (NUTN/ TMU): Robot Engine for Communication with Human Go Players and Demo/Testing;
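To make the Stage III input/output encoding concrete, the sketch below shows how one game's time windows could be flattened into a feature vector and paired with its result label. The container and function names are our own; the value ranges and the 12-values-per-window layout follow the description above.

# Sketch of the Stage III data layout: each time window holds eleven situation
# codes M1..M11 (each in {-2, -1, 0, 1, 2}) plus a partial result AM, and each
# game carries a result label: Black wins (-1), draw (0), or White wins (1).
from dataclasses import dataclass
from typing import List

@dataclass
class TimeWindow:
    situations: List[int]  # M1..M11
    partial_result: int    # AM

@dataclass
class GameRecord:
    windows: List[TimeWindow]  # up to four windows per game
    result: int                # -1, 0, or 1

def to_feature_vector(game: GameRecord) -> List[int]:
    """Flatten the available windows into a 12 * len(windows) feature vector."""
    features: List[int] = []
    for window in game.windows:
        features.extend(window.situations)
        features.append(window.partial_result)
    return features

# Example: a game observed through a single (opening) time window -> 12 features.
example = GameRecord(
    windows=[TimeWindow(situations=[0, 0, 1, 1, 0, -1, 0, 1, 2, 1, 1],
                        partial_result=1)],
    result=1,
)
assert len(to_feature_vector(example)) == 12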

Data

Regarding the provided data, we use the 60 competition games played between Master and top professional Go players in Dec. 2016 and Jan. 2017. The data for each game were sampled at four points during the game; that is, there are four time windows per game. We provide part of these data to the participants, under the four settings described below (a small sketch illustrating the settings follows the list):

We provide only the first time window of each game. That is, there are 12 inputs (M1, ..., M11, and AM) per pattern (game). The participants build classifiers to predict the final result using the game situations at the beginning of the game;
We provide the first and second time windows of each game. That is, there are 12 × 2 inputs, (M1, ..., M11, and AM) × 2, per pattern (game). The participants build classifiers to predict the final result using roughly the first half of the game situations;
We provide the first three time windows of each game. That is, there are 12 × 3 inputs, (M1, ..., M11, and AM) × 3, per pattern (game). The participants build classifiers to predict the final result using most of the game situations;
We provide all four time windows of each game. That is, there are 12 × 4 inputs, (M1, ..., M11, and AM) × 4, per pattern (game). The participants build classifiers to predict the final result using all four time windows of game situations.
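The four settings are simply cumulative prefixes of the same per-game window sequence. The short sketch below (with fabricated values; only the shapes matter) builds the 12-, 24-, 36-, and 48-input versions of a game.

# Sketch of the four data settings: keep the first k time windows of a game
# (k = 1..4), giving 12, 24, 36, or 48 inputs per pattern.
from typing import List

def make_setting(windows: List[List[int]], k: int) -> List[int]:
    """Flatten the first k windows (each holding the 12 values M1..M11, AM)."""
    assert 1 <= k <= len(windows)
    return [value for window in windows[:k] for value in window]

# A fabricated game with four 12-value windows.
game_windows = [[0] * 12, [1] * 12, [1] * 12, [2] * 12]
for k in (1, 2, 3, 4):
    print(f"setting {k}: {len(make_setting(game_windows, k))} inputs")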

Expected Humans

Professional Go Players
- Chun-Hsun Chou (9P / Taiwan)

Expected Computer Go Programs
- Darkforest Open Source



         
Co-organizers

KWS, NUTN

CCS, Japan

MOST, Taiwan

HAMASTAR

NCHC

Hifong Weiqi Academy

Bureau of Education

KGS
 