Real-time games often require a combination of long-term and short-term planning as well as interleaved planning and execution. In our previous work, we introduced a hybrid planning and execution approach, in which high-level strategical planning is performed by a Hierarchical Task Network Planner and micro-management is done through Monte Carlo Tree Search. We use evaluation functions that represent weighted sums of selected game features as an interface between the two hierarchy levels. In this work, we present a way of automatically evolving the weights of these evaluation functions in order to improve the efficiency of the execution of high-level tasks. We compare the agent using the evolved evaluation functions with the one using manually created evaluation functions against state-of-the-art controllers in the Real-Time Strategy game environment microRTS.
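The core idea of a weighted-sum evaluation function with evolved weights can be illustrated with a minimal sketch. The feature names, the fitness function, and the (1+1)-style mutation loop below are illustrative assumptions, not the paper's actual features or evolutionary algorithm:

```python
import random

# Hypothetical game-state features; the real feature set used in the
# paper's microRTS agent is not reproduced here.
def extract_features(state):
    return [state["own_units"], state["enemy_units"], state["resources"]]

def evaluate(weights, state):
    # Evaluation function: a weighted sum of selected game features.
    return sum(w * f for w, f in zip(weights, extract_features(state)))

def evolve_weights(fitness, generations=50, dim=3, sigma=0.1, seed=0):
    # A simple (1+1)-style evolutionary loop: mutate the current weight
    # vector with Gaussian noise and keep the child if it is no worse.
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    best_fit = fitness(best)
    for _ in range(generations):
        child = [w + rng.gauss(0.0, sigma) for w in best]
        child_fit = fitness(child)
        if child_fit >= best_fit:
            best, best_fit = child, child_fit
    return best

# Toy fitness for demonstration: reward weights close to a hidden
# "ideal" vector (in the paper, fitness comes from simulated games).
ideal = [1.0, -1.0, 0.5]
def fitness(weights):
    return -sum((w - i) ** 2 for w, i in zip(weights, ideal))

evolved = evolve_weights(fitness)
```

In the actual approach, fitness would instead be measured by how efficiently the MCTS-based executor completes the high-level tasks assigned by the HTN planner when guided by the candidate evaluation function.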
Cite this work
@inproceedings{neufeld2019evolving,
  author    = {Neufeld, Xenija and Mostaghim, Sanaz and Perez-Liebana, Diego},
  title     = {{Evolving Game State Evaluation Functions for a Hybrid Planning Approach}},
  year      = {2019},
  booktitle = {{IEEE Conference on Games (COG)}},
  pages     = {1--8},
  abstract  = {Real-time games often require a combination of long-term and short-term planning as well as interleaved planning and execution. In our previous work, we introduced a hybrid planning and execution approach, in which high-level strategical planning is performed by a Hierarchical Task Network Planner and micro-management is done through Monte Carlo Tree Search. We use evaluation functions that represent weighted sums of selected game features as an interface between the two hierarchy levels. In this work, we present a way of automatically evolving the weights of these evaluation functions in order to improve the efficiency of the execution of high-level tasks. We compare the agent using the evolved evaluation functions with the one using manually created evaluation functions against state-of-the-art controllers in the Real-Time Strategy game environment microRTS.},
}