Imprecise Probabilities Meet Partial Observability: Game Semantics for Robust POMDPs
Eline M. Bovy, Marnix Suilen, Sebastian Junges, Nils Jansen
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6697-6706.
https://doi.org/10.24963/ijcai.2024/740
Partially observable Markov decision processes (POMDPs) rely on the key assumption that probability distributions are precisely known. Robust POMDPs (RPOMDPs) alleviate this concern by defining imprecise probabilities, referred to as uncertainty sets. While robust MDPs have been studied extensively, work on RPOMDPs is limited and primarily focuses on algorithmic solution methods. We expand the theoretical understanding of RPOMDPs by showing that 1) different assumptions on the uncertainty sets affect optimal policies and values; 2) RPOMDPs have partially observable stochastic game (POSG) semantics; and 3) the same RPOMDP under different assumptions leads to semantically different POSGs and, thus, different policies and values. These novel semantics for RPOMDPs give access to results on POSGs from game theory; concretely, we show the existence of a Nash equilibrium. Finally, we classify the existing RPOMDP literature using our semantics, clarifying under which uncertainty assumptions these existing works operate.
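As a brief illustration of what an uncertainty set can look like (a minimal sketch in our own notation, not the formalism used in the paper), an interval-based uncertainty set constrains each transition probability of an RPOMDP to lie between known lower and upper bounds:
\[
\mathcal{T}(s,a) \;=\; \bigl\{\, T(\cdot \mid s,a) \in \Delta(S) \;\big|\; \underline{T}(s' \mid s,a) \,\le\, T(s' \mid s,a) \,\le\, \overline{T}(s' \mid s,a) \ \text{for all } s' \in S \,\bigr\},
\]
where \(\Delta(S)\) denotes the set of probability distributions over the states \(S\), and \(\underline{T}\), \(\overline{T}\) are the given interval bounds. The paper's point 1) then concerns assumptions on how nature resolves such sets, e.g., whether a single distribution is fixed once or may vary over time.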
Keywords:
Planning and Scheduling: PS: POMDPs
Agent-based and Multi-agent Systems: MAS: Formal verification, validation and synthesis
Uncertainty in AI: UAI: Sequential decision making