GAIG Game AI Research Group @ QMUL

Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning

2020
Zhentao Tang, Yuanheng Zhu, Dongbin Zhao and Simon M. Lucas

Abstract

The Fighting Game AI Competition (FTGAIC) provides a challenging benchmark for 2-player video game AI: a large action space, diverse character styles and abilities, and real-time play. We propose a novel algorithm that combines the Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning; the approach is readily applicable to any 2-player video game. In contrast to conventional RHEA, an opponent model is introduced and optimized by supervised learning with cross-entropy, and by reinforcement learning with policy gradient and with Q-learning, respectively, based on historical observations of the opponent. The model is learned during live gameplay. With the learned opponent model, the extended RHEA is able to make more realistic plans based on what the opponent is likely to do, which tends to lead to better results. We compared our approach directly with the bots from the FTGAIC 2018 competition and found that it significantly outperforms all of them, for all three characters. Furthermore, our proposed bot with the policy-gradient-based opponent model is the only one among the top five bots in the 2019 competition that does not use Monte-Carlo Tree Search (MCTS); it achieved second place while using much less domain knowledge than the winner.
URL: https://ieeexplore.ieee.org/document/9190073
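The sketch below (plain Python, not the authors' code) illustrates the general idea under stated assumptions: a small softmax opponent model is fitted online by cross-entropy on the opponent's observed (state, action) pairs, and a plain Rolling Horizon Evolution loop samples the opponent's replies from that model when rolling candidate action sequences through a forward model. All names here (ForwardModel with step/score, the featurizer, the hyperparameters) are illustrative assumptions rather than the paper's interface, and the policy-gradient and Q-learning variants of the opponent model described in the paper are not shown.

# Illustrative sketch, not the authors' implementation. Assumes a generic
# 2-player forward model with step(state, my_action, opp_action) and
# score(state) (e.g. an HP difference); both are hypothetical.
import random
import numpy as np


class OpponentModel:
    """Softmax policy over the opponent's discrete actions, fitted online by
    minimising cross-entropy on the opponent's observed (state, action) pairs."""

    def __init__(self, n_features, n_actions, lr=0.01):
        self.W = np.zeros((n_features, n_actions))
        self.lr = lr
        self.n_actions = n_actions

    def probs(self, state):
        logits = state @ self.W
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def update(self, state, action):
        # One gradient step on the cross-entropy loss for the observed action.
        p = self.probs(state)
        target = np.zeros(self.n_actions)
        target[action] = 1.0
        self.W -= self.lr * np.outer(state, p - target)

    def sample(self, state):
        return int(np.random.choice(self.n_actions, p=self.probs(state)))


def evaluate(sequence, state, forward_model, opponent_model, featurize):
    """Roll a candidate action sequence forward; the opponent's replies are
    sampled from the learned model instead of being assumed fixed or random."""
    for my_action in sequence:
        opp_action = opponent_model.sample(featurize(state))
        state = forward_model.step(state, my_action, opp_action)
    return forward_model.score(state)


def rhea_act(state, forward_model, opponent_model, featurize,
             n_actions, horizon=6, pop_size=12, generations=10, mut_rate=0.3):
    """Simplified Rolling Horizon Evolution over fixed-length action sequences."""
    pop = [[random.randrange(n_actions) for _ in range(horizon)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda s: evaluate(
            s, state, forward_model, opponent_model, featurize), reverse=True)
        elite = scored[: pop_size // 2]
        # Mutate each elite sequence to refill the population.
        children = [[a if random.random() > mut_rate else random.randrange(n_actions)
                     for a in parent] for parent in elite]
        pop = elite + children
    best = max(pop, key=lambda s: evaluate(
        s, state, forward_model, opponent_model, featurize))
    return best[0]  # execute only the first action, then re-plan next frame

In this sketch the agent would call opponent_model.update(...) each frame with the opponent's last observed action and rhea_act(...) to choose its own, re-planning every frame as in standard rolling-horizon control.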

Cite this work

@article{tang2020enhanced,
author= {Tang, Zhentao and Zhu, Yuanheng and Zhao, Dongbin and Lucas, Simon M},
title= {{Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning}},
year= {2020},
journal= {{IEEE Transactions on Games}},
url= {https://ieeexplore.ieee.org/document/9190073},
abstract= {The Fighting Game AI Competition (FTGAIC) provides a challenging benchmark for 2-player video game AI: a large action space, diverse character styles and abilities, and real-time play. We propose a novel algorithm that combines the Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning; the approach is readily applicable to any 2-player video game. In contrast to conventional RHEA, an opponent model is introduced and optimized by supervised learning with cross-entropy, and by reinforcement learning with policy gradient and with Q-learning, respectively, based on historical observations of the opponent. The model is learned during live gameplay. With the learned opponent model, the extended RHEA is able to make more realistic plans based on what the opponent is likely to do, which tends to lead to better results. We compared our approach directly with the bots from the FTGAIC 2018 competition and found that it significantly outperforms all of them, for all three characters. Furthermore, our proposed bot with the policy-gradient-based opponent model is the only one among the top five bots in the 2019 competition that does not use Monte-Carlo Tree Search (MCTS); it achieved second place while using much less domain knowledge than the winner.},
}
