Go is an abstract strategy board game for two players in which the aim is to surround more territory than the opponent. The task is to train an agent that plays the game at a level superior to other players.
These leaderboards are used to track progress in the Game of Go.
Use these libraries to find Game of Go models and implementations.
No subtasks available.
This paper generalises the approach into a single AlphaZero algorithm that achieves, tabula rasa, superhuman performance in many challenging domains, convincingly defeating a world-champion program in each case.
Against human players, the newest version, darkfores2, achieves a stable 3d level on the KGS Go Server as a ranked bot, a substantial improvement over the estimated 4k-5k ranks for the DCNN reported in Clark & Storkey (2015) based on games against other machine players.
A new algorithm, Stochastic MuZero, is introduced that learns a stochastic model incorporating afterstates and uses this model to perform a stochastic tree search, maintaining the superhuman performance of standard MuZero in the game of Go.
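As a concrete illustration of the afterstate idea mentioned above: the agent's action first leads deterministically to an afterstate, and only then does a chance event produce the next state, so value and policy can be predicted before the stochasticity resolves. The toy game and every function in the sketch below are assumptions made for illustration, not the paper's model.

```python
# A minimal sketch of an afterstate decomposition: action -> afterstate (deterministic),
# then chance outcome -> next state. Purely illustrative; not Stochastic MuZero itself.
import random

def afterstate(state: int, action: int) -> int:
    # Deterministic consequence of the agent's own choice (e.g. committing to a move).
    return state + action

def chance_step(after: int) -> int:
    # Stochastic environment outcome applied to the afterstate (e.g. a die roll).
    return after + random.choice([-1, 0, 1])

random.seed(0)
s = 0
for a in (2, 1, 3):
    after = afterstate(s, a)   # value/policy can be predicted here, before chance resolves
    s = chance_step(after)
    print(f"action={a} afterstate={after} next_state={s}")
```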
The MuZero algorithm is presented, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics.
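To make the idea concrete, here is a minimal sketch of planning with a learned model in the MuZero style: a representation function maps an observation to a latent state, a dynamics function advances that latent state under an action, and a prediction function supplies policy and value for the search. The random linear "networks", the tensor shapes, and the exhaustive lookahead standing in for MCTS are all illustrative assumptions, not the published implementation.

```python
# Illustrative sketch: tree search driven entirely by a learned model (h, g, f),
# with no access to the true game dynamics.
import numpy as np

rng = np.random.default_rng(0)
LATENT, ACTIONS = 8, 4

# Stand-ins for trained networks: fixed random linear maps.
W_repr = rng.normal(size=(16, LATENT))                 # h: observation -> latent state
W_dyn  = rng.normal(size=(LATENT + ACTIONS, LATENT))   # g: (latent, action) -> next latent
W_pol  = rng.normal(size=(LATENT, ACTIONS))            # f: latent -> policy logits
w_val  = rng.normal(size=LATENT)                       # f: latent -> value

def represent(obs):
    return np.tanh(obs @ W_repr)

def dynamics(state, action):
    one_hot = np.eye(ACTIONS)[action]
    return np.tanh(np.concatenate([state, one_hot]) @ W_dyn)

def predict(state):
    logits = state @ W_pol
    return np.exp(logits) / np.exp(logits).sum(), float(state @ w_val)

def plan(obs, depth=3):
    """Tiny lookahead inside the learned model: expand every action sequence to
    `depth` and back up predicted values (a stand-in for MuZero's MCTS)."""
    def search(state, d):
        policy, value = predict(state)
        if d == 0:
            return value
        returns = [search(dynamics(state, a), d - 1) for a in range(ACTIONS)]
        return float(np.dot(policy, returns))          # policy-weighted backup
    root = represent(obs)
    scores = [search(dynamics(root, a), depth - 1) for a in range(ACTIONS)]
    return int(np.argmax(scores)), scores

action, scores = plan(rng.normal(size=16))
print("chosen action:", action, "value estimates:", np.round(scores, 3))
```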
Convolutional neural networks trained to play Go are able to consistently defeat the well-known Go program GNU Go, indicating that this approach is state of the art among programs that do not use Monte Carlo Tree Search.
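For readers unfamiliar with the setup, the sketch below shows the general shape of such a move-prediction network: binary feature planes of the 19x19 board go in, and a probability distribution over the 361 intersections comes out. The PyTorch framing, the number of input planes, and the layer sizes are illustrative assumptions rather than the architectures used in these papers.

```python
# Illustrative convolutional move predictor for Go (not any specific paper's network).
import torch
import torch.nn as nn

class GoMovePredictor(nn.Module):
    def __init__(self, in_planes: int = 17, channels: int = 64, blocks: int = 4):
        super().__init__()
        layers = [nn.Conv2d(in_planes, channels, 3, padding=1), nn.ReLU()]
        for _ in range(blocks - 1):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        self.trunk = nn.Sequential(*layers)
        self.head = nn.Conv2d(channels, 1, 1)          # one logit per intersection

    def forward(self, planes: torch.Tensor) -> torch.Tensor:
        x = self.trunk(planes)                         # (N, C, 19, 19)
        logits = self.head(x).flatten(1)               # (N, 361)
        return torch.log_softmax(logits, dim=1)        # log-probabilities over moves

# Example: score a batch of two positions encoded as 17 binary feature planes each.
net = GoMovePredictor()
boards = torch.zeros(2, 17, 19, 19)
print(net(boards).shape)                               # torch.Size([2, 361])
```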
The experimental results demonstrate that the proposed FDAA can work effectively for Go applications within an FML-based human-machine cooperative system for the game of Go.
ELF OpenGo, an open-source reimplementation of the AlphaZero algorithm, is proposed; it is the first open-source Go AI to convincingly demonstrate superhuman performance, with a perfect (20:0) record against global top professionals.
MoËT is a novel model based on Mixture of Experts, consisting of decision tree experts and a generalized linear model gating function; it is more expressive than a standard decision tree and can be used on real-world supervised problems, on which it outperforms other verifiable machine learning models.
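The sketch below illustrates the general structure described above: several decision tree experts whose outputs are combined by a generalized linear (softmax) gating function over the inputs. The region-assignment heuristic and the training procedure are assumptions chosen to keep the example short; they are not the paper's algorithm.

```python
# Illustrative mixture of decision-tree experts with a softmax (generalized linear) gate.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 2))
y = np.sin(X[:, 0]) + 0.5 * np.sign(X[:, 1])           # toy regression target

# 1. Assign each point to a region (here via k-means; the paper learns this jointly).
K = 3
regions = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(X)

# 2. Gating function: a multinomial logistic (generalized linear) model over x.
gate = LogisticRegression(max_iter=1000).fit(X, regions)

# 3. Experts: one shallow decision tree per region.
experts = [DecisionTreeRegressor(max_depth=3).fit(X[regions == k], y[regions == k])
           for k in range(K)]

def predict(Xq):
    weights = gate.predict_proba(Xq)                   # (n, K) soft expert weights
    preds = np.column_stack([e.predict(Xq) for e in experts])
    return (weights * preds).sum(axis=1)               # gate-weighted expert mixture

print("train MSE:", float(np.mean((predict(X) - y) ** 2)))
```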