Information-Theoretic Opacity-Enforcement in Markov Decision Processes

Chongyang Shi, Yuheng Bu, Jie Fu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6779-6787. https://doi.org/10.24963/ijcai.2024/749

The paper studies information-theoretic opacity, an information-flow privacy property, in a setting involving two agents: a planning agent who controls a stochastic system and an observer who partially observes the system states. The goal of the observer is to infer some secret, represented by a random variable, from its partial observations, while the goal of the planning agent is to make the secret maximally opaque to the observer while achieving a satisfactory total return. Modeling the stochastic system as a Markov decision process, we consider two classes of opacity properties: last-state opacity ensures that the observer remains uncertain whether the last state lies in a specific set, while initial-state opacity ensures that the observer cannot determine the realization of the initial state. As the measure of opacity, we employ the Shannon conditional entropy, which captures the information about the secret revealed by the observations. We then develop primal-dual policy gradient methods for opacity-enforcement planning subject to constraints on total returns. We propose novel algorithms to compute the policy gradient of entropy for each observation, leveraging message passing within hidden Markov models. This gradient computation enables stable and fast convergence. We demonstrate our opacity-enforcement control solution on a grid-world example.
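To make the opacity measure concrete, the following is a minimal, hypothetical sketch (not the authors' algorithm) of computing the Shannon conditional entropy H(S | O) for initial-state opacity in a toy hidden Markov model: the secret S is the initial state, and forward message passing yields the joint probability of each observation sequence with each initial state. All model parameters (`prior`, `T`, `Obs`, horizon) are illustrative assumptions.

```python
import itertools
import math

# Toy HMM: 2 states, 2 observation symbols, horizon 2 (all values illustrative).
# The "secret" is the initial state s0 (initial-state opacity).
prior = [0.5, 0.5]             # P(s0)
T = [[0.7, 0.3], [0.2, 0.8]]   # T[s][s'] = P(s' | s)
Obs = [[0.9, 0.1], [0.4, 0.6]] # Obs[s][o] = P(o | s)
HORIZON = 2                    # number of observations

def joint(obs_seq):
    """Return [P(o_{1:H}, s0) for each s0], computed via forward messages."""
    out = []
    for s0 in range(len(prior)):
        # alpha[s] = P(o_{1:t}, s_t | s0), seeded at the known initial state
        alpha = [0.0] * len(prior)
        alpha[s0] = Obs[s0][obs_seq[0]]
        for o in obs_seq[1:]:
            alpha = [
                sum(alpha[s] * T[s][sp] for s in range(len(prior))) * Obs[sp][o]
                for sp in range(len(prior))
            ]
        out.append(prior[s0] * sum(alpha))
    return out

# H(S0 | O) = -sum_{o,s0} P(o, s0) * log2 P(s0 | o),
# enumerating all observation sequences (feasible only for tiny models).
cond_entropy = 0.0
for obs_seq in itertools.product(range(2), repeat=HORIZON):
    j = joint(obs_seq)
    p_o = sum(j)
    if p_o > 0:
        cond_entropy -= sum(p * math.log2(p / p_o) for p in j if p > 0)
```

A larger `cond_entropy` (closer to the prior entropy, here 1 bit) means the observations reveal less about the secret, which is what the planning agent's policy would aim to maximize subject to the return constraint.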
Keywords:
Planning and Scheduling: PS: Planning with Incomplete Information
Planning and Scheduling: PS: Markov decision processes
Planning and Scheduling: PS: Planning algorithms
Planning and Scheduling: PS: Planning under uncertainty