Most reinforcement learning research focuses on environments where the agent's actions are either purely discrete or purely continuous. When training an agent to play a video game, however, it is common to encounter actions with both discrete and continuous components: a set of high-level discrete actions (e.g., move, jump, fire), each associated with continuous parameters (e.g., target coordinates for the move action, direction for the jump action, aiming angle for the fire action). Tasks of this kind fall under Control with Parameterised Actions.
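The structure described above can be sketched as a hybrid action space: one discrete choice plus a per-action vector of continuous parameters. The action names, bounds, and shapes below are illustrative assumptions, not the specification of any particular benchmark.

```python
import numpy as np

# Hypothetical parameter bounds for each high-level discrete action.
PARAM_SPECS = {
    "move": {"low": np.array([-1.0, -1.0]), "high": np.array([1.0, 1.0])},    # target (x, y)
    "jump": {"low": np.array([-np.pi]),     "high": np.array([np.pi])},       # direction
    "fire": {"low": np.array([0.0]),        "high": np.array([2 * np.pi])},   # aiming angle
}
ACTIONS = list(PARAM_SPECS)


def sample_hybrid_action(rng):
    """Uniformly sample a (discrete action, continuous parameters) pair."""
    name = ACTIONS[rng.integers(len(ACTIONS))]
    spec = PARAM_SPECS[name]
    params = rng.uniform(spec["low"], spec["high"])
    return name, params


rng = np.random.default_rng(0)
action, params = sample_hybrid_action(rng)
```

A policy for such a space must output both the discrete choice and the parameters for that choice, which is exactly the coupling that algorithms like P-DQN, MP-DQN, and Hybrid SAC address.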
(Image credit: Papersgraph)
It is empirically demonstrated that MP-DQN significantly outperforms P-DQN and other previous algorithms in terms of data efficiency and converged policy performance on the Platform, Robot Soccer Goal, and Half Field Offense domains.
It is shown that Hybrid SAC can successfully solve a high-speed driving task in one of the authors' games, and is competitive with the state of the art on parameterized-action benchmark tasks.