Look-ahead Search on Top of Policy Networks in Imperfect Information Games

Ondřej Kubíček, Neil Burch, Viliam Lisý

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4344-4352. https://doi.org/10.24963/ijcai.2024/480

Test-time search is often used to improve the performance of reinforcement learning algorithms. Performing theoretically sound search in fully adversarial two-player games with imperfect information is notoriously difficult and requires a complicated training process. We present a method for adding test-time search to an arbitrary policy-gradient algorithm that learns from sampled trajectories. Besides the policy network, the algorithm trains an additional critic network, which estimates the expected values of the players when they follow various transformations of the policies given by the policy network. These values are then used for depth-limited search. We show how the values from this critic can create a value function for imperfect information games. Moreover, they can be used to compute the summary statistics necessary to start the search from an arbitrary decision point in the game. The presented algorithm is scalable to very large games since it does not require any search during training. We evaluate the algorithm's performance when trained alongside Regularized Nash Dynamics, and we evaluate the benefit of using the search in the standard benchmark game of Leduc hold'em, multiple variants of imperfect information Goofspiel, and Battleships.
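
To make the abstract's central idea concrete, the following is a minimal sketch, not the authors' implementation, of a multi-valued critic: a network that maps a state encoding to the expected values of both players under a small set of fixed transformations of the current policy, which depth-limited search can then use as leaf evaluations. All module names, shapes, and hyperparameters here are illustrative assumptions.

    # Minimal sketch of a multi-valued critic (illustrative, not the paper's code).
    import torch
    import torch.nn as nn

    class MultiValuedCritic(nn.Module):
        def __init__(self, state_dim: int, n_transformations: int, hidden: int = 128):
            super().__init__()
            self.n_transformations = n_transformations
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                # one expected value per (player, policy transformation) pair
                nn.Linear(hidden, 2 * n_transformations),
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            # shape: (batch, 2 players, n_transformations)
            return self.net(state).view(-1, 2, self.n_transformations)

    # At test time, a depth-limited search truncates the lookahead and
    # substitutes these critic outputs as leaf values, with each player
    # choosing among the policy transformations at the depth limit.
    critic = MultiValuedCritic(state_dim=64, n_transformations=4)
    leaf_values = critic(torch.randn(1, 64))  # tensor of shape (1, 2, 4)

Because the critic is trained from the same sampled trajectories as the policy network, this construction adds no search to training, which is what keeps the approach scalable to very large games.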
Keywords:
Machine Learning: ML: Multiagent Reinforcement Learning
Agent-based and Multi-agent Systems: MAS: Multi-agent learning
Search: S: Game playing
Search: S: Local search