Accommodating Human Variability in Human-Robot Teams through Theory of Mind
Laura M. Hiatt, Anthony M. Harrison, J. Gregory Trafton

Abstract
The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot uses a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause of the human's behavior. This allows the cognitive robot to account for variability due both to differing knowledge and beliefs about the world and to the different possible paths the human could take with a given set of knowledge and beliefs. An experiment showed that cognitive robots equipped with this functionality are viewed as more natural and more intelligent teammates than robots that either say nothing when presented with human variability or simply point out any discrepancies between the human's expected and actual behavior. Overall, this analysis yields an effective, general approach for determining what thought process is leading to a human's actions.
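A minimal sketch of the idea summarized above, under assumed names and structures that are illustrative only and not the authors' implementation: each candidate cognitive model (one hypothesis about the human's knowledge and beliefs) is simulated forward, and the hypothesis under which the observed, unexpected action is most probable is taken as the most likely cause of the behavior.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class HypotheticalModel:
    """One hypothesis about the human teammate's knowledge and beliefs."""
    name: str
    # Simulating the model yields a distribution over the human's next
    # actions, e.g. {"fetch_wrench": 0.7, "fetch_hammer": 0.3}.
    # (Hypothetical interface; the paper's simulations are cognitive models.)
    simulate: Callable[[], Dict[str, float]]


def most_likely_explanation(models: List[HypotheticalModel],
                            observed_action: str) -> HypotheticalModel:
    """Return the hypothesis that best accounts for the observed action."""
    return max(models,
               key=lambda m: m.simulate().get(observed_action, 0.0))


if __name__ == "__main__":
    # Two illustrative hypotheses about why the human fetched the hammer
    # instead of the wrench the robot expected.
    models = [
        HypotheticalModel("believes_task_is_assembly",
                          lambda: {"fetch_wrench": 0.8, "fetch_hammer": 0.2}),
        HypotheticalModel("believes_task_is_repair",
                          lambda: {"fetch_hammer": 0.9, "fetch_wrench": 0.1}),
    ]
    best = most_likely_explanation(models, observed_action="fetch_hammer")
    print(f"Most likely cause of the unexpected action: {best.name}")
```

The selection rule here is a simple argmax over hypotheses; it stands in for the richer simulation analysis described in the paper, where the candidate models capture both differing knowledge and beliefs and the different paths a human could take with a given set of beliefs.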