Real-Time Strategy (RTS) tasks involve training an agent to play video games with continuous gameplay and high-level macro-strategic goals such as map control and economic superiority. (Image credit: Multi-platform Version of StarCraft: Brood War in a Docker Container)
This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the StarCraft II game that offers a new and challenging setting for exploring deep reinforcement learning algorithms and architectures, and reports initial baseline results for neural networks trained on replay data to predict game outcomes and player actions.
The StarCraft Multi-Agent Challenge (SMAC), based on the popular real-time strategy game StarCraft II, is proposed as a benchmark problem, and an open-source deep multi-agent RL framework including state-of-the-art algorithms is released.
This is the first public work to investigate AI agents that can defeat the built-in AI in the StarCraft II full game; the AI agent TStarBot1 is based on deep reinforcement learning over a flat action structure, and the AI agent TStarBot2 is based on hard-coded rules over a hierarchical action structure.
It is demonstrated that stress and concentration levels for professional players are less correlated, indicating a more independent playstyle, and that the absence of team communication does not affect professional players as much as amateur ones.
Gym-µRTS (pronounced “gym-micro-RTS”) is introduced as a fast-to-run RL environment for full-game RTS research, along with a collection of techniques to scale DRL to play full-game µRTS and ablation studies demonstrating their empirical importance.
ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research, is proposed, and it is shown that a network with Leaky ReLU and Batch Normalization coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS.
In recent years, Deep Reinforcement Learning (DRL) algorithms have achieved state-of-the-art performance in many challenging strategy games. Because these games have complicated rules, an action sampled from the full discrete action distribution predicted by the learned policy is likely to be invalid according to the game rules (e.g., walking into a wall). The usual approach to deal with this problem in policy gradient algorithms is to “mask out” invalid actions and just sample from the set of valid actions. The implications of this process, however, remain under-investigated. In this paper, we 1) show theoretical justification for such a practice, 2) empirically demonstrate its importance as the space of invalid actions grows, and 3) provide further insights by evaluating different action masking regimes, such as removing masking after an agent has been trained using masking.
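The masking practice described above can be sketched in a few lines: invalid actions get their logits pushed to a large negative value before the softmax, so the policy effectively samples only from the valid set. This is a minimal NumPy illustration under assumed inputs; the function name, the toy logits, and the mask are hypothetical, not from the paper.

```python
import numpy as np

def masked_softmax_sample(logits, valid_mask, rng):
    """Sample an action after masking out invalid ones.

    logits:     raw scores from the policy network, shape (n_actions,)
    valid_mask: boolean array, True where the action is legal
    """
    # Invalid actions receive a large negative logit, i.e. ~zero probability.
    masked = np.where(valid_mask, logits, -1e8)
    # Numerically stable softmax over the masked logits.
    z = masked - masked.max()
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(logits), p=probs)

# Toy usage: actions 1 and 3 are invalid under the game rules.
rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -0.3])
mask = np.array([True, False, True, False])
action = masked_softmax_sample(logits, mask, rng)  # always a valid action
```

In policy gradient training, the same masked probabilities are used both for sampling and for the log-probability term, which is the regime whose theoretical justification the paper examines.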
This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft, a library that enables deep learning research on Real-Time Strategy games such as StarCraft: Brood War.
RTMM, a real-time variant of the standard minimax algorithm, is presented; its applicability to RTS games, along with its strengths and weaknesses, is discussed, and it is evaluated in two real-time games.
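For reference, the standard depth-limited minimax that RTMM adapts to real-time settings can be sketched as follows. This is a generic illustration, not the paper's algorithm; the game interface (`children`, `evaluate`) and the toy tree are assumptions.

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Return the depth-limited minimax value of `state`.

    children(state) -> list of successor states (empty if terminal)
    evaluate(state) -> heuristic value from the maximizing player's view
    """
    succs = children(state)
    if depth == 0 or not succs:
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing, children, evaluate)
              for s in succs]
    # Maximizing player picks the best value, minimizing player the worst.
    return max(values) if maximizing else min(values)

# Toy example: a two-level game tree encoded as nested lists of leaf values.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
best = minimax(tree, 2, True, children, evaluate)  # max over min of each branch
```

A real-time variant must bound this search to fit within a frame budget, which is the constraint RTS games impose and the paper addresses.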