Reconciling Rewards with Predictive State Representations
Andrea Baisero, Christopher Amato
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2170-2176.
https://doi.org/10.24963/ijcai.2021/299
Predictive state representations (PSRs) are models of controlled non-Markov
observation sequences that reproduce the generative process governing
POMDP observations without relying on an underlying latent state. In that
respect, a PSR is indistinguishable from the corresponding POMDP. However,
PSRs notoriously ignore the notion of rewards, which undermines the general
utility of PSR models for control, planning, or reinforcement learning.
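For context, the sketch below recalls the standard linear-PSR formulation
introduced by Littman, Sutton, and Singh; the notation here is assumed for
illustration and is not taken from the paper.

% Linear-PSR state and update (assumed notation).
% The state is a vector of predictions for n core tests q_1, ..., q_n
% given the history h:
\[
  \mathbf{p}(h) = \bigl[\, p(q_1 \mid h), \ldots, p(q_n \mid h) \,\bigr]^{\top}
\]
% Linearity: every test t has a weight vector m_t such that
\[
  p(t \mid h) = \mathbf{m}_t^{\top} \mathbf{p}(h),
\]
% and after taking action a and observing o the state updates as
\[
  \mathbf{p}(hao) = \frac{M_{ao}\, \mathbf{p}(h)}{\mathbf{m}_{ao}^{\top}\, \mathbf{p}(h)},
\]
% where the rows of M_{ao} are the weight vectors of the one-step
% extensions (a, o) q_i, and m_{ao}^T p(h) = Pr(o | h, a).
% No latent state appears anywhere: the prediction vector p(h) is the state.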
Therefore, we describe a necessary and sufficient accuracy condition
that determines whether a PSR is able to accurately model POMDP rewards; we
show that rewards can be approximated even when the accuracy condition is not
satisfied; and we find that a non-trivial number of POMDPs taken from a
well-known third-party repository do not satisfy the accuracy condition.
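One way to make such a condition concrete, not necessarily the paper's exact
statement, is to require that expected rewards be linear in the predictive
state under the assumed notation above:

% Assumed linearity condition on rewards: there exist per-action weight
% vectors w_a such that, for every reachable history h,
\[
  \mathbb{E}\bigl[ r \mid h, a \bigr] = \mathbf{w}_a^{\top} \mathbf{p}(h)
  \qquad \text{for all } a \in A.
\]
% When no such w_a exist, expected rewards can only be approximated, e.g.
% by projecting the reward function onto the span of core-test predictions.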
We propose reward-predictive state representations (R-PSRs), a
generalization of PSRs that accurately models both observations and rewards,
and we develop value iteration for R-PSRs. We show that there is a mismatch
between optimal POMDP policies and the optimal PSR policies derived from
approximate rewards. Optimal R-PSR policies, on the other hand, perfectly
match optimal POMDP policies, confirming R-PSRs as accurate stateless
generative models of observations and rewards.
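For intuition on how value iteration carries over to predictive states, a
Bellman backup analogous to POMDP belief-space value iteration can be
sketched as follows. This uses the assumed notation and reward weights from
above; it is not the paper's R-PSR formulation, which additionally handles
rewards that are not linear in the plain PSR state.

% Bellman backup over predictive states (sketch; assumed notation).
\[
  V_{k+1}(\mathbf{p}) = \max_{a \in A} \Bigl[ \mathbf{w}_a^{\top} \mathbf{p}
    + \gamma \sum_{o \in O} \bigl( \mathbf{m}_{ao}^{\top} \mathbf{p} \bigr)\,
      V_k\!\Bigl( \frac{M_{ao}\, \mathbf{p}}{\mathbf{m}_{ao}^{\top} \mathbf{p}} \Bigr) \Bigr]
\]
% Since m_{ao}^T p = Pr(o | h, a), the observation weights play the role of
% the POMDP observation model; as in the belief-space case, each V_k is
% piecewise-linear and convex in p and so admits an alpha-vector form.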
Keywords:
Machine Learning: Reinforcement Learning
Planning and Scheduling: POMDPs
Uncertainty in AI: Uncertainty Representations