Learning Sparse Representations in Reinforcement Learning with Sparse Coding

Lei Le, Raksha Kumaraswamy, Martha White

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2067-2073. https://doi.org/10.24963/ijcai.2017/287

A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse coding objective for policy evaluation. Despite the non-convexity of this objective, we prove that all local minima are global minima, making the approach amenable to simple optimization strategies. We empirically show that it is key to use a supervised objective, rather than the more straightforward unsupervised sparse coding approach. We then compare the learned representations to a canonical fixed sparse representation, called tile-coding, demonstrating that the sparse coding representation outperforms a wide variety of tile-coding representations.
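To make the contrast concrete: the "more straightforward unsupervised sparse coding approach" mentioned above encodes an input as a sparse combination of dictionary atoms, without using value-prediction targets. The sketch below is a generic illustration of that unsupervised encoding step (it is not the paper's supervised objective). It solves the standard lasso-style problem min_h 0.5||x - Dh||² + λ||h||₁ with ISTA; the dictionary `D`, the regularization weight `lam`, and the iteration count are all illustrative assumptions.

```python
import numpy as np

def sparse_code(x, D, lam=0.05, n_iters=200):
    """Unsupervised sparse coding of x under dictionary D via ISTA.

    Solves min_h 0.5 * ||x - D h||^2 + lam * ||h||_1.
    x: (d,) input; D: (d, k) dictionary with unit-norm columns.
    Returns a sparse code h of length k. (Illustrative sketch only.)
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    h = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ h - x)           # gradient of the quadratic term
        z = h - grad / L                   # gradient step
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return h

# Hypothetical data: a signal built from 3 of 50 dictionary atoms.
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)             # normalize atoms to unit norm
h_true = np.zeros(50)
h_true[[3, 17, 41]] = [1.0, -0.5, 2.0]
x = D @ h_true

h = sparse_code(x, D)
print("active features:", int(np.sum(np.abs(h) > 1e-3)))
```

Because the L1 penalty drives most coefficients to exactly zero, only a handful of the 50 features are active for any given input, which is the kind of sparse, distributed representation the paper contrasts with fixed tile-coding partitions.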
Keywords:
Machine Learning: Feature Selection/Construction
Machine Learning: Reinforcement Learning