
General Win Prediction from Agent Experience


Abstract

The question of whether the correct algorithm is used for the problem at hand usually comes at the end of execution, when the algorithm's ability to solve the problem (or not) can be verified. But what if this question could be answered in advance, with enough notice to make changes in the approach in order for it to be more successful? This paper proposes a general agent performance prediction system, tested in real time within the context of the General Video Game AI framework. It is solely based on agent features, therefore removing potential human bias produced by game-based features observed in known games. Three different models can be queried while playing the game to determine whether the agent will win or lose, based on the current game state: early, mid and late game feature models. The models are trained on 80 games in the framework and tested on 20 new games, for 14 variations of 3 different methods. Results are positive, indicating that there is great scope for predicting the outcome of any given game.
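To make the three-model setup concrete, below is a minimal sketch of how such phase-dependent querying could look. It assumes scikit-learn-style classifiers; the phase thresholds, the five-feature vectors, and the helper names (phase_of, train_phase_models, predict_win) are illustrative assumptions, not the implementation from the paper.

    # A minimal sketch of phase-dependent win prediction.
    # Assumes scikit-learn classifiers; feature names and helpers are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    PHASES = ("early", "mid", "late")

    def phase_of(tick, max_ticks):
        # Map the current game tick to one of the three phase models.
        frac = tick / max_ticks
        if frac < 1 / 3:
            return "early"
        if frac < 2 / 3:
            return "mid"
        return "late"

    def train_phase_models(features_by_phase, labels_by_phase):
        # Fit one classifier per phase on agent-feature vectors
        # paired with win/loss labels from past playthroughs.
        models = {}
        for phase in PHASES:
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(features_by_phase[phase], labels_by_phase[phase])
            models[phase] = clf
        return models

    def predict_win(models, agent_features, tick, max_ticks):
        # Query the model matching the current game phase; returns estimated P(win).
        model = models[phase_of(tick, max_ticks)]
        return model.predict_proba(np.asarray(agent_features).reshape(1, -1))[0, 1]

    # Toy usage with synthetic data standing in for logged agent features.
    rng = np.random.default_rng(0)
    X = {p: rng.normal(size=(200, 5)) for p in PHASES}
    y = {p: rng.integers(0, 2, size=200) for p in PHASES}
    models = train_phase_models(X, y)
    print(predict_win(models, rng.normal(size=5), tick=150, max_ticks=600))

Because the classifier is chosen purely by how far the game has progressed, the same query interface works at any point during play, which matches the paper's real-time setting.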
URL: https://ieeexplore.ieee.org/document/8490439
GitHub: https://github.com/rdgain/ExperimentData/tree/GeneralWinPred-CIG-18
DOI: 10.1109/CIG.2018.8490439
YouTube: https://youtu.be/zq9zaEjspUY

Cite this work

@inproceedings{gaina2018win,
  author    = {Raluca D. Gaina and Simon M. Lucas and Diego Perez-Liebana},
  title     = {{General Win Prediction from Agent Experience}},
  year      = {2018},
  booktitle = {{Proc. of the IEEE Conference on Computational Intelligence and Games (CIG)}},
  month     = {Aug},
  pages     = {1--8},
  keywords  = {general video game playing; rolling horizon evolution; monte carlo tree search; win prediction},
  url       = {https://ieeexplore.ieee.org/document/8490439},
  doi       = {10.1109/CIG.2018.8490439},
  abstract  = {The question of whether the correct algorithm is used for the problem at hand usually comes at the end of execution, when the algorithm's ability to solve the problem (or not) can be verified. But what if this question could be answered in advance, with enough notice to make changes in the approach in order for it to be more successful? This paper proposes a general agent performance prediction system, tested in real time within the context of the General Video Game AI framework. It is solely based on agent features, therefore removing potential human bias produced by game-based features observed in known games. Three different models can be queried while playing the game to determine whether the agent will win or lose, based on the current game state: early, mid and late game feature models. The models are trained on 80 games in the framework and tested on 20 new games, for 14 variations of 3 different methods. Results are positive, indicating that there is great scope for predicting the outcome of any given game.},
}
