SlateQ: A Tractable Decomposition for Reinforcement Learning with Recommendation Sets
Eugene Ie, Vihan Jain, Jing Wang, Sanmit Narvekar, Ritesh Agarwal, Rui Wu, Heng-Tze Cheng, Tushar Chandra, Craig Boutilier
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2592-2599.
https://doi.org/10.24963/ijcai.2019/360
Reinforcement learning methods for recommender systems optimize recommendations for long-term user engagement. However, since users are often presented with slates of multiple items---which may have interacting effects on user choice---methods are required to deal with the combinatorics of the RL action space. We develop SlateQ, a decomposition of value-based temporal-difference and Q-learning that renders RL tractable with slates. Under mild assumptions on user choice behavior, we show that the long-term value (LTV) of a slate can be decomposed into a tractable function of its component item-wise LTVs. We demonstrate our methods in simulation, and validate the scalability and effectiveness of decomposed TD-learning on YouTube.
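The decomposition described in the abstract can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it assumes a multinomial-logit-style choice model in which each item i has a choice score v(s, i), so the slate's long-term value is a choice-probability-weighted sum of item-wise LTVs; the function and parameter names are assumptions for illustration.

```python
def slate_q_value(slate_scores, item_ltvs, null_score=1.0):
    """Combine item choice scores v(s, i) and item-wise long-term values
    Q(s, i) into a slate value: Q(s, A) = sum_i P(i | s, A) * Q(s, i),
    where P(i | s, A) = v(s, i) / (v_null + sum_j v(s, j)).

    `null_score` models the user selecting nothing from the slate
    (a no-click option); setting it to 0 forces a choice from the slate.
    """
    denom = null_score + sum(slate_scores)
    # Choice-probability-weighted sum of item-wise LTVs.
    return sum(v * q for v, q in zip(slate_scores, item_ltvs)) / denom
```

The key point is tractability: the slate's value is computed from per-item quantities, so learning and optimization can operate item-wise rather than over the combinatorial space of slates.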
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Recommender Systems
Machine Learning Applications: Applications of Reinforcement Learning