Card games are played with playing cards; the task is to train an agent that plays a game under its specified rules and beats other players.
This paper provides an overview of the key components of RLCard, a discussion of its design principles, a brief introduction to its interfaces, and comprehensive evaluations of its environments.
This paper introduces the first scalable end-to-end approach to learning approximate Nash equilibria without prior domain knowledge, combining fictitious self-play with deep reinforcement learning.
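The core of combining fictitious self-play with reinforcement learning is that each agent maintains two policies, a best-response policy trained by RL and an average policy trained by supervised learning, and samples which one drives each episode. The sketch below illustrates only that mixing step; the weight `ETA` is an illustrative assumption, not a value from the paper.

```python
import random

# Illustrative sketch of the policy-mixing idea in neural fictitious
# self-play: per episode, act from the best-response (RL) policy with
# probability ETA, otherwise from the average (supervised) policy.
ETA = 0.1  # assumed mixing weight, for illustration only

def choose_policy(rng: random.Random) -> str:
    """Pick which of the two policies drives the current episode."""
    return "best_response" if rng.random() < ETA else "average"

rng = random.Random(0)
counts = {"best_response": 0, "average": 0}
for _ in range(10_000):
    counts[choose_policy(rng)] += 1
# Roughly a fraction ETA of episodes use the best-response policy.
```

Playing the average policy most of the time is what lets the empirical play of all agents converge toward a fictitious-play average rather than chasing each other's latest best responses.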
A novel neural network architecture is presented that generates an output sequence conditioned on an arbitrary number of input functions; it allows both the choice of conditioning context and the granularity of generation (for example, characters or tokens) to be marginalised, permitting scalable and effective training.
A new virtual environment simulating the card game "Big 2" is introduced, and the recently proposed Proximal Policy Optimization algorithm is used to train a deep neural network to play the game purely via self-play; the trained network outperforms amateur human players after only a relatively short amount of training time.
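Proximal Policy Optimization, the algorithm used here, limits how far each update can move the policy via a clipped surrogate objective. A minimal single-sample sketch, using the commonly cited default clip range of 0.2 (an assumption, not a figure from this paper):

```python
# Hedged sketch of PPO's clipped surrogate objective for one
# (state, action) sample: min(r*A, clip(r, 1-eps, 1+eps)*A),
# where r is the new/old policy probability ratio and A the advantage.
def clipped_surrogate(ratio: float, advantage: float, eps: float = 0.2) -> float:
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# A large policy shift is cut off: a ratio of 2.0 is treated as 1.2.
print(clipped_surrogate(2.0, 1.0))   # 1.2
print(clipped_surrogate(0.5, -1.0))  # -0.8
```

Taking the minimum makes the objective a pessimistic bound: the policy gains nothing from pushing the probability ratio outside the clip range, which stabilizes self-play training.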
This work provides an overview of the current state of the art in Artificial Intelligence methods for card games in general and their application to the use case of the Swiss card game Jass.
A new deck recommendation system, named Q-DeckRec, learns a deck search policy during a training phase and uses it to solve deck-building problem instances; compared to several baseline methods, it requires fewer computational resources to build winning-effective decks after training.
This paper proposes a novel method to handle combinatorial actions, called combinational Q-learning (CQL), which employs a two-stage network to reduce the action space and leverages order-invariant max-pooling operations to extract relationships between primitive actions.
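The order-invariant max-pooling idea can be shown in a few lines: embed each card in a hand separately, then take an elementwise max across the embeddings, so the pooled representation is the same for any ordering of the cards. The 3-dimensional toy embedding below is a hypothetical stand-in for the paper's learned network.

```python
# Sketch of order-invariant pooling over a hand of cards.
def embed(card: int) -> list[float]:
    """Toy per-card embedding (hypothetical, not the paper's network)."""
    return [float(card), float(card % 4), float(card % 13)]

def pool_hand(hand: list[int]) -> list[float]:
    """Elementwise max over card embeddings: invariant to card order."""
    vecs = [embed(c) for c in hand]
    return [max(col) for col in zip(*vecs)]

# The same hand in two orders pools to the same vector.
assert pool_hand([3, 27, 14]) == pool_hand([14, 3, 27])
```

Because the pooled vector ignores ordering, the network never has to learn that permutations of the same hand are equivalent, which shrinks the effective action/state space.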
Here it is shown that a single general-purpose Artificial Intelligence program, called "Solvitaire", can determine the winnability percentage of 45 different single-player card games with a 95% confidence interval of +/- 0.1% or better.
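A +/- 0.1% half-width at 95% confidence implies a large number of sampled deals. Under the standard normal approximation for a proportion (a back-of-envelope sketch, not Solvitaire's actual methodology), the half-width is z * sqrt(p*(1-p)/n):

```python
import math

def half_width(p: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation CI half-width for a proportion p over n samples."""
    return z * math.sqrt(p * (1.0 - p) / n)

def deals_needed(p: float, target: float, z: float = 1.96) -> int:
    """Smallest n whose half-width is at most `target` at win-rate p."""
    return math.ceil((z / target) ** 2 * p * (1.0 - p))

# Worst case p = 0.5: on the order of a million random deals per game.
print(deals_needed(0.5, 0.001))
```

The required sample size shrinks as the true winnability moves away from 50%, since p*(1-p) peaks at p = 0.5.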
Collectible card games are played by tens of millions of players worldwide. Their intricate rules and diverse cards make them much harder than traditional card games. To win, players must be proficient in two interdependent tasks: deck building and battling. In this paper, we present a deep reinforcement learning approach for deck building in arena mode - an understudied game mode present in many collectible card games. In arena, the players build decks immediately before battling by drafting one card at a time from randomly presented candidates. We investigate three variants of the approach and perform experiments on Legends of Code and Magic, a collectible card game designed for AI research. Results show that our learned draft strategies outperform those of the best agents of the game. Moreover, a participant of the Strategy Card Game AI competition improves from tenth to fourth place when coupled with our best draft agent.
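Arena drafting as described above is a sequential decision problem: at each pick, score the randomly presented candidates in the context of the partial deck and take the best one. The sketch below uses a toy hand-written scoring function as a hypothetical placeholder for the learned draft policy; card IDs, pool size, and the three-candidate pick are illustrative assumptions.

```python
import random

def score(card: int, deck: list[int]) -> float:
    """Toy value function (hypothetical): prefer cards near the deck's average."""
    if not deck:
        return -abs(card - 15)
    avg = sum(deck) / len(deck)
    return -abs(card - avg)

def draft(deck_size: int, pool: range, rng: random.Random) -> list[int]:
    """Greedy draft: at each pick, choose the best of 3 random candidates."""
    deck: list[int] = []
    for _ in range(deck_size):
        candidates = rng.sample(pool, 3)
        deck.append(max(candidates, key=lambda c: score(c, deck)))
    return deck

deck = draft(30, range(1, 100), random.Random(0))
```

In the learned setting, `score` would be replaced by a trained network evaluating each candidate given the draft so far, which is what the paper's deep reinforcement learning variants provide.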