COG-DICE: An Algorithm for Solving Continuous-Observation Dec-POMDPs
Madison Clark-Turner, Christopher Amato
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 4573-4579.
https://doi.org/10.24963/ijcai.2017/638
The decentralized partially observable Markov decision process (Dec-POMDP) is a powerful model for representing multi-agent problems with decentralized behavior. Unfortunately, current Dec-POMDP solution methods cannot solve problems with continuous observations, which are common in many real-world domains. To that end, we present a framework for representing and generating Dec-POMDP policies that explicitly include continuous observations. We apply our algorithm to a novel tagging problem and an extended version of a common benchmark, where it generates policies that meet or exceed the values of equivalent discretized domains without the need for finding an adequate discretization.
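To make the idea of a policy that branches directly on continuous observations concrete, here is a minimal illustrative sketch (not the paper's COG-DICE algorithm; the `ControllerNode` class, thresholds, and toy controller are all hypothetical): a finite-state controller whose edges partition a one-dimensional observation axis into intervals, so no prior discretization of the observation space is required.

```python
# Hypothetical sketch, not COG-DICE itself: a finite-state controller node
# whose outgoing edges partition a 1-D continuous observation space into
# intervals, letting the policy branch on continuous observations directly.
import bisect

class ControllerNode:
    def __init__(self, action, thresholds, successors):
        # thresholds: sorted cut points partitioning the observation axis
        # successors: len(thresholds)+1 next-node ids, one per interval
        assert len(successors) == len(thresholds) + 1
        self.action = action
        self.thresholds = thresholds
        self.successors = successors

    def next_node(self, observation):
        # select the successor for the interval containing the observation
        return self.successors[bisect.bisect_right(self.thresholds, observation)]

# Toy two-node controller for one agent; observations lie in [0, 1].
controller = {
    0: ControllerNode(action="listen", thresholds=[0.5], successors=[0, 1]),
    1: ControllerNode(action="tag", thresholds=[], successors=[1]),
}

node = 0
for obs in [0.2, 0.7, 0.9]:
    node = controller[node].next_node(obs)
print(node)  # ends in node 1: 0.2 stays in node 0, 0.7 crosses the 0.5 cut
```

In a Dec-POMDP each agent would run its own such controller locally; a solver in this style must optimize both the threshold locations and the node-to-node structure rather than assuming a fixed observation discretization.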
Keywords:
Uncertainty in AI: Markov Decision Processes
Agent-based and Multi-agent Systems: Coordination and cooperation
Planning and Scheduling: Distributed/Multi-agent Planning