BRPO: Batch Residual Policy Optimization
Sungryull Sohn, Yinlam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, Craig Boutilier
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2824-2830.
https://doi.org/10.24963/ijcai.2020/391
In batch reinforcement learning (RL), one often constrains a learned policy to be close to the behavior (data-generating) policy, e.g., by constraining the learned action distribution to differ from the behavior policy by at most some fixed degree that is the same at every state. This can make batch RL overly conservative: it cannot exploit large policy changes at frequently visited, high-confidence states without risking poor performance at sparsely visited states. To remedy this, we propose residual policies, where the allowable deviation of the learned policy is state-action-dependent. We derive a new RL method, BRPO, which learns both the policy and the allowable deviation that jointly maximize a lower bound on policy performance. We show that BRPO achieves state-of-the-art performance on a number of tasks.
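To make the core idea concrete, below is a minimal illustrative sketch in PyTorch of a residual policy whose deviation from the behavior policy is state-action-dependent. It is not the paper's implementation; the names (ResidualPolicy, behavior_probs, the mixing network weight) and the particular convex-combination form are assumptions chosen for illustration.

```python
# Illustrative sketch only; names and architecture are hypothetical,
# not taken from the BRPO paper's implementation.
import torch
import torch.nn as nn


class ResidualPolicy(nn.Module):
    """Convex combination of a fixed behavior policy and a learned policy.

    The mixing weight w(s, a) in [0, 1] is itself learned, so the allowable
    deviation from the behavior policy varies per state-action pair rather
    than being a single global constraint.
    """

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        # Learned candidate policy (logits over actions).
        self.candidate = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )
        # Learned per-state-action mixing weight in [0, 1].
        self.weight = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions), nn.Sigmoid(),
        )

    def forward(self, state: torch.Tensor, behavior_probs: torch.Tensor):
        # Where w -> 0 the policy stays close to the behavior policy
        # (low-confidence, sparsely visited regions); where w -> 1 it may
        # deviate freely (high-confidence, frequently visited regions).
        w = self.weight(state)
        candidate_probs = torch.softmax(self.candidate(state), dim=-1)
        probs = (1.0 - w) * behavior_probs + w * candidate_probs
        # Renormalize: with action-dependent weights the mixture need not
        # sum to one across actions.
        probs = probs / probs.sum(dim=-1, keepdim=True)
        return torch.distributions.Categorical(probs=probs)


# Usage sketch with stand-in data.
policy = ResidualPolicy(state_dim=4, num_actions=3)
states = torch.randn(2, 4)
behavior = torch.softmax(torch.randn(2, 3), dim=-1)  # stand-in behavior policy
dist = policy(states, behavior)
actions = dist.sample()
```

In BRPO, both the policy and the allowable deviation are trained jointly to maximize a lower bound on policy performance; the sketch above shows only the policy parameterization, not that training objective.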
Keywords:
Machine Learning: Deep Reinforcement Learning
Machine Learning: Reinforcement Learning