On Combining Decisions from Multiple Expert Imitators for Performance

Jonathan Rubin, Ian Watson

Abstract
One approach for artificially intelligent agents that wish to maximise some performance metric in a given domain is to learn from a collection of training data consisting of actions or decisions made by an expert, in an attempt to imitate that expert's style. We refer to this type of agent as an expert imitator. In this paper we investigate whether performance can be improved by combining decisions from multiple expert imitators. In particular, we examine two existing approaches for combining decisions. The first combines decisions by ensemble voting among multiple expert imitators. The second dynamically selects the best imitator to use at runtime, based on how each imitator has performed in the current environment. We evaluate these approaches in the domain of computer poker: we create expert imitators for limit and no-limit Texas Hold'em and determine whether their performance can be improved by combining their decisions using the two approaches described above.
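To make the two combination strategies concrete, the minimal sketch below illustrates, under assumed names and data structures (the paper does not specify an implementation), how a majority vote over imitators' proposed actions and a runtime selection of the best-performing imitator might look.

```python
from collections import Counter
import random

# Illustrative sketch only: action labels, function names, and the
# profit-tracking dictionary are assumptions, not the paper's implementation.

def ensemble_vote(actions):
    """Combine decisions by majority vote over the imitators' proposed actions."""
    counts = Counter(actions)
    top_count = counts.most_common(1)[0][1]
    # Break ties uniformly at random among the most-voted actions.
    winners = [action for action, count in counts.items() if count == top_count]
    return random.choice(winners)

def dynamic_select(imitator_performance):
    """Select the imitator with the best observed performance so far."""
    return max(imitator_performance, key=imitator_performance.get)

# Example usage with made-up values.
print(ensemble_vote(["raise", "call", "raise"]))                 # -> "raise"
print(dynamic_select({"imitator_a": 1.5, "imitator_b": -0.2}))   # -> "imitator_a"
```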