Cooperation and Control in Delegation Games

Oliver Sourbut, Lewis Hammond, Harriet Wood

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 229-237. https://doi.org/10.24963/ijcai.2024/26

Many settings of interest involving humans and machines – from virtual personal assistants to autonomous vehicles – can naturally be modelled as principals (humans) delegating to agents (machines), which then interact with each other on their principals’ behalf. We refer to these multi-principal, multi-agent scenarios as delegation games. In such games, there are two important failure modes: problems of control (where an agent fails to act in line with its principal’s preferences) and problems of cooperation (where the agents fail to work well together). In this paper we formalise and analyse these problems, further breaking them down into issues of alignment (do the players have similar preferences?) and capabilities (how competent are the players at satisfying those preferences?). We show – theoretically and empirically – how these measures determine the principals’ welfare, how they can be estimated using limited observations, and thus how they might be used to help us design more aligned and cooperative AI systems.
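To make the setting concrete, the sketch below is a minimal toy delegation game, not the paper's formalism: two principals each delegate to an agent, the agents' utilities are noisy copies of their principals' (a stand-in for misalignment), and a softmax temperature stands in for capability. All names and parameters here (misalignment_noise, capability_temp, the random utility matrices) are illustrative assumptions rather than the measures defined in the paper.

```python
# Toy delegation game: 2 principals, 2 agents, 3 actions each.
# Illustrative only; the "misalignment" noise and "capability" temperature
# below are ad-hoc stand-ins, not the paper's alignment/capability measures.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3

# Principal i's utility over joint actions (a1, a2); shape (3, 3).
principal_utils = [rng.uniform(0, 1, (n_actions, n_actions)) for _ in range(2)]

# Agent i's utility: a noisy copy of its principal's (controls misalignment).
misalignment_noise = 0.3
agent_utils = [u + misalignment_noise * rng.uniform(0, 1, u.shape)
               for u in principal_utils]

def softmax(x, temp):
    z = (x - x.max()) / max(temp, 1e-8)
    e = np.exp(z)
    return e / e.sum()

def play(agent_utils, capability_temp=0.1):
    """Each agent soft-maximises its own utility against a uniform belief
    about the other agent; lower temperature = more capable agent."""
    choices = []
    for i, u in enumerate(agent_utils):
        # Expected utility of each own action, averaging over the other agent.
        expected = u.mean(axis=1) if i == 0 else u.mean(axis=0)
        choices.append(rng.choice(n_actions, p=softmax(expected, capability_temp)))
    return tuple(choices)

a1, a2 = play(agent_utils)
welfare = sum(u[a1, a2] for u in principal_utils)  # total principal welfare
print(f"joint action: ({a1}, {a2}), principal welfare: {welfare:.3f}")
```

Varying misalignment_noise illustrates control failures (agents optimise the wrong objective), while raising capability_temp illustrates capability failures (agents optimise their objective badly); the paper studies how measures of this kind determine and bound principal welfare.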
Keywords:
Agent-based and Multi-agent Systems: MAS: Coordination and cooperation
AI Ethics, Trust, Fairness: ETF: Safety and robustness
Game Theory and Economic Paradigms: GTEP: Other
Humans and AI: HAI: Human-AI collaboration