Deep reinforcement learning has shown success in game playing. However, 2.5D fighting games remain a challenging task because of ambiguity in visual appearance, such as the height or depth of the characters. Moreover, actions in such games typically follow particular sequential orders, which also makes the network design difficult. Building on the Asynchronous Advantage Actor-Critic (A3C) framework, we create an OpenAI-Gym-like gaming environment for the game Little Fighter 2 (LF2) and present a novel A3C+ network for learning RL agents. The introduced model includes a Recurrent Info network, which feeds game-related info features through recurrent layers so that the agent can observe combo skills for fighting. In the experiments, we consider LF2 under different settings, successfully demonstrating the use of the proposed model for learning 2.5D fighting games.
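The abstract describes a Recurrent Info network that passes game-related info features through recurrent layers so the agent can track combo skills, i.e. actions whose meaning depends on the preceding sequence. The paper's exact architecture is not given here, so the following is a minimal sketch under assumed details: an Elman-style recurrent update over a hypothetical 8-dimensional info vector (e.g. HP, MP, positions) feeding a small policy head; all dimensions and names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 8 game-info features,
# 16 recurrent hidden units, 4 discrete actions.
INFO_DIM, HIDDEN_DIM, N_ACTIONS = 8, 16, 4

# Randomly initialized weights for an Elman-style recurrent cell
# and a linear policy head.
W_in = rng.normal(0.0, 0.1, (HIDDEN_DIM, INFO_DIM))
W_h = rng.normal(0.0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
b_h = np.zeros(HIDDEN_DIM)
W_pi = rng.normal(0.0, 0.1, (N_ACTIONS, HIDDEN_DIM))

def recurrent_info_step(info, h):
    """One recurrent update over the current game-info features."""
    return np.tanh(W_in @ info + W_h @ h + b_h)

def policy_probs(h):
    """Softmax policy head over the recurrent hidden state."""
    logits = W_pi @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Unroll over a short sequence of info vectors, as a recurrent layer
# would when accumulating the state needed to recognize a combo
# (a particular sequential order of actions).
h = np.zeros(HIDDEN_DIM)
for t in range(5):
    info_t = rng.normal(0.0, 1.0, INFO_DIM)  # stand-in for real game info
    h = recurrent_info_step(info_t, h)

probs = policy_probs(h)
```

The key design point the abstract implies is that the hidden state `h` carries history across frames, which a feed-forward network cannot do, so sequential action orders become learnable.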
Yu-Jing Lin
Po-Wei Wu