GAIG Game AI Research Group @ QMUL

Analysis of Statistical Forward Planning Methods in Pommerman

2019
Diego Perez-Liebana and Raluca D. Gaina and Olve Drageset and Ercument Ilhan and Martin Balla and Simon M. Lucas

Abstract

Pommerman is a complex multi-player and partially observable game in which agents try to be the last one standing in order to win. The game poses interesting challenges for AI, such as collaboration, learning and planning. In this paper, we compare two Statistical Forward Planning algorithms, Monte Carlo Tree Search (MCTS) and Rolling Horizon Evolutionary Algorithm (RHEA), in Pommerman. We provide insights into how the agents actually play the game, inspecting their behaviours to explain their performance. Results show that MCTS outperforms RHEA in several game settings, while leaving room for multiple avenues of future work: tuning these methods, improving opponent modelling, identifying trap moves and introducing assumptions for partial observability settings.
URL: https://www.aaai.org/ojs/index.php/AIIDE/article/view/5226
Github: https://github.com/GAIGResearch/java-pommerman 
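Both algorithms studied in the paper are Statistical Forward Planning methods: they use a forward model of the game to simulate the outcome of candidate action sequences before committing to a move. The sketch below illustrates this idea with a minimal (1+1) Rolling Horizon Evolutionary Agent in Java. It is only an illustration of the general technique, not the implementation from the java-pommerman repository; the ForwardModel interface and its methods (copy, advance, score, numActions) are hypothetical placeholders.

import java.util.Random;

/**
 * Minimal Rolling Horizon Evolutionary Algorithm sketch (1+1 variant).
 * The ForwardModel interface below is an illustrative placeholder,
 * not the java-pommerman API.
 */
public class RheaSketch {

    /** Hypothetical forward model: a copyable state that can be advanced and scored. */
    interface ForwardModel {
        ForwardModel copy();       // deep copy of the game state
        void advance(int action);  // apply one action for the planning agent
        double score();            // heuristic value of the resulting state
        int numActions();          // size of the action space (6 in Pommerman)
    }

    private final Random rnd = new Random();
    private final int horizon;     // plan length (number of future actions)
    private final int iterations;  // evolution budget per decision

    RheaSketch(int horizon, int iterations) {
        this.horizon = horizon;
        this.iterations = iterations;
    }

    /** Returns the first action of the best plan found within the budget. */
    int act(ForwardModel state) {
        int n = state.numActions();
        int[] best = randomPlan(n);
        double bestValue = evaluate(state, best);

        for (int i = 0; i < iterations; i++) {
            int[] candidate = mutate(best, n);
            double value = evaluate(state, candidate);
            if (value >= bestValue) {  // keep the better (or equal) plan
                best = candidate;
                bestValue = value;
            }
        }
        return best[0];                // execute only the first action, then replan
    }

    private int[] randomPlan(int numActions) {
        int[] plan = new int[horizon];
        for (int i = 0; i < horizon; i++) plan[i] = rnd.nextInt(numActions);
        return plan;
    }

    private int[] mutate(int[] plan, int numActions) {
        int[] child = plan.clone();
        child[rnd.nextInt(horizon)] = rnd.nextInt(numActions);  // flip one gene
        return child;
    }

    /** Rolls the plan forward on a copy of the state and scores the end state. */
    private double evaluate(ForwardModel state, int[] plan) {
        ForwardModel sim = state.copy();
        for (int action : plan) sim.advance(action);
        return sim.score();
    }
}

Only the first action of the evolved plan is executed before the whole process restarts on the next game tick, which is what makes the horizon "rolling"; MCTS follows the same simulate-then-act loop but builds a search tree instead of evolving action sequences.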

Cite this work

@inproceedings{perez2019pommerman,
  author    = {Diego Perez-Liebana and Raluca D. Gaina and Olve Drageset and Ercument Ilhan and Martin Balla and Simon M. Lucas},
  title     = {{Analysis of Statistical Forward Planning Methods in Pommerman}},
  year      = {2019},
  booktitle = {{Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE)}},
  volume    = {15},
  number    = {1},
  pages     = {66--72},
  url       = {https://www.aaai.org/ojs/index.php/AIIDE/article/view/5226},
  abstract  = {Pommerman is a complex multi-player and partially observable game in which agents try to be the last one standing in order to win. The game poses interesting challenges for AI, such as collaboration, learning and planning. In this paper, we compare two Statistical Forward Planning algorithms, Monte Carlo Tree Search (MCTS) and Rolling Horizon Evolutionary Algorithm (RHEA), in Pommerman. We provide insights into how the agents actually play the game, inspecting their behaviours to explain their performance. Results show that MCTS outperforms RHEA in several game settings, while leaving room for multiple avenues of future work: tuning these methods, improving opponent modelling, identifying trap moves and introducing assumptions for partial observability settings.},
}
