Abstract
On Stochastic Optimal Control and Reinforcement Learning by Approximate Inference (Extended Abstract) / 3052
Konrad Rawlik, Marc Toussaint, Sethu Vijayakumar
We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective on previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problems, while direct application of Bayesian inference methods yields instances of risk-sensitive control.
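To make the KL-control idea concrete, the sketch below uses the closely related linearly-solvable MDP setting (Todorov), where choosing controlled dynamics q(.|x) to minimise expected state cost plus KL(q || p) against passive dynamics p turns the Bellman recursion into a linear recursion in the desirability z = exp(-V), with the exponentiated cost acting like a likelihood in an inference pass. This is an illustrative sketch under those assumptions, not the authors' algorithm; the toy dynamics, costs, and variable names are invented for the example.

```python
import numpy as np

# Finite-horizon KL control on a toy finite-state MDP (illustrative).
# Controlled transitions q(x'|x) minimise  E[cost] + KL(q(.|x) || p(.|x)),
# where p is the passive/prior dynamics.  The Bellman recursion is then
# linear in the desirability z(x) = exp(-V(x)):
#   z_T = exp(-c),   z_t = exp(-c) * (P @ z_{t+1}),
# and the optimal controlled dynamics are  q*(x'|x) ∝ p(x'|x) z_{t+1}(x').

rng = np.random.default_rng(0)
n, T = 5, 20                                 # states, horizon (toy values)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)            # row-stochastic prior dynamics
c = rng.random(n)                            # per-step state cost (toy values)

z = np.exp(-c)                               # terminal desirability z_T
Z = [z]
for t in range(T):                           # exact backward recursion
    z = np.exp(-c) * (P @ z)
    Z.append(z)                              # Z[k] holds z_{T-k}

V0 = -np.log(Z[-1])                          # optimal cost-to-go at t = 0
q0 = P * Z[-2][None, :]                      # first-step transitions use z_1
q0 /= q0.sum(axis=1, keepdims=True)          # normalise q*(x'|x)

print("V(0, .):", np.round(V0, 3))
print("q*(x'|x) at t=0:\n", np.round(q0, 3))
```

The same exponentiated-cost term that makes this recursion linear is what lets Bayesian inference machinery be applied directly, which is the route by which the abstract's risk-sensitive control instances arise.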