3260 papers • 126 benchmarks • 313 datasets
The task is to train an agent to play SNES games such as Super Mario. (Image credit: Large-Scale Study of Curiosity-Driven Learning)
These leaderboards are used to track progress in video games.
Use these libraries to find video game models and implementations.
This work proposes convolutional network architectures that generate Q-values and updates for a large number of goals at once, and shows that replacing the random actions in ε-greedy exploration with actions directed toward feasible goals produces better exploratory trajectories on Montezuma's Revenge and Super Mario All-Stars.
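The exploration change described above can be sketched as a small modification to standard ε-greedy action selection. This is an illustrative sketch only: the function and argument names are hypothetical, not taken from the paper's code, and the "feasible goal actions" are assumed to be precomputed elsewhere.

```python
import random

def choose_action(q_values, goal_directed_actions, epsilon=0.1):
    """Epsilon-greedy selection where the exploratory branch samples an
    action directed toward a feasible goal instead of a uniformly random
    action (a toy sketch of the idea summarized above)."""
    if random.random() < epsilon:
        # Explore: pick among actions that move toward some feasible goal,
        # rather than an arbitrary random action.
        return random.choice(goal_directed_actions)
    # Exploit: pick the greedy action under the current Q-values.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With `epsilon=0` this reduces to the greedy policy; with `epsilon=1` every step is a goal-directed exploratory step.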
This paper trains a GAN to generate levels for Super Mario Bros using a level from the Video Game Level Corpus, and uses the champion A* agent from the 2009 Mario AI competition to assess whether a level is playable, and how many jumping actions are required to beat it.
A new learning environment, the Retro Learning Environment (RLE), is introduced. It can run games on the Super Nintendo Entertainment System, Sega Genesis, and several other gaming consoles, and it is expandable, so additional games and consoles can be added while maintaining the same interface.
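Environments like RLE expose a standard reset/step interaction loop. The sketch below shows that loop with a toy stand-in environment; `StubEnv` and its fixed 10-step episode are purely illustrative assumptions, not part of RLE's actual API.

```python
class StubEnv:
    """Toy stand-in for a console environment: the episode ends
    after 10 steps and every step yields a reward of 1."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # initial observation

    def step(self, action):
        self.t += 1
        obs, reward, done = self.t, 1.0, self.t >= 10
        return obs, reward, done

def run_episode(env, policy):
    """Generic agent-environment loop: reset, then step until done."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total
```

The same `run_episode` loop works for any environment that follows this interface, which is what makes an expandable environment collection useful.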
This paper performs the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite, and shows surprisingly good performance.
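The core signal in curiosity-driven learning is an intrinsic reward equal to the error of a learned forward model that predicts the next state from the current state and action. The function below is a minimal sketch of that bonus using squared prediction error; the paper's actual models are learned neural networks, so this is only an assumed simplification.

```python
def intrinsic_reward(predicted_next_state, actual_next_state):
    """Curiosity bonus: squared error between the forward model's
    prediction and the observed next state. States are assumed to be
    equal-length feature vectors (an illustrative simplification)."""
    return sum((p - a) ** 2
               for p, a in zip(predicted_next_state, actual_next_state))
```

States the model predicts well yield near-zero bonus, so the agent is pushed toward transitions it cannot yet predict, with no extrinsic reward needed.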