GAIG Game AI Research Group @ QMUL

Studying Believability Assessment in Racing Games


Abstract

Believability is a hard concept to define in video games: it depends on what one considers “believable”, which is often subjective. In recent years, several researchers have tried to assess this concept through Turing Tests on agents programmed to behave like a human rather than focusing solely on winning; examples include the Mario AI Competition and the 2K BotPrize. Given the small pool of explored parameters and the focus on programming the bots rather than on the assessment itself, in this paper we examine different methods of evaluating believability in video games. We explore believability through recorded gameplay that judges analyze, but we vary the assessment parameters (such as ranking instead of binary answers) when asking how human-like the presented behaviours are. The objective of this study is to analyze the different ways believability can be assessed for humans and non-player characters (NPCs), comparing how the results and scores of both are affected when the parameters change. To provide a more general analysis, the study is carried out on two different racing games rather than one. Results show that these parameters do change the overall outcome of the study, and they highlight the importance of generalizing these concepts in game AI, as believability clearly depends on genre, game and even questionnaire design.

Cite this work

@inproceedings{pacheco2018studying,
  author    = {Pacheco, Cristiana and Tokarchuk, Laurissa and Perez-Liebana, Diego},
  title     = {{Studying Believability Assessment in Racing Games}},
  year      = {2018},
  booktitle = {{Proceedings of the 13th International Conference on the Foundations of Digital Games}},
  pages     = {1--10},
}
