A Conservative Approach for Few-Shot Transfer in Off-Dynamics Reinforcement Learning
Paul Daoudi, Christophe Prieur, Bogdan Robu, Merwan Barlier, Ludovic Dos Santos
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3890-3898.
https://doi.org/10.24963/ijcai.2024/430
Off-dynamics Reinforcement Learning (ODRL) seeks to transfer a policy from a source environment to a target environment with distinct yet similar dynamics. In this setting, traditional RL agents depend excessively on the source dynamics, yielding policies that excel in the source environment but perform poorly in the target one. In the few-shot framework, a small number of transitions from the target environment is made available to facilitate a more effective transfer. To address this challenge, we propose an approach inspired by recent advances in Imitation Learning and Conservative RL algorithms, which introduces a penalty to regularize the trajectories generated by the source-trained policy. We evaluate our method across environments representing diverse off-dynamics conditions in which access to the target environment is extremely limited, including high-dimensional systems relevant to real-world applications. In most tested scenarios, our method improves upon existing baselines.
Keywords:
Machine Learning: ML: Reinforcement learning
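To make the idea concrete, the following is a minimal sketch of one way such a conservative penalty could be realized. The abstract does not specify the penalty's exact form; here we assume, purely for illustration, that it is derived from a density model fitted on the few available target transitions, so the agent is discouraged from visiting transitions unsupported by the target environment. All function names, parameters (e.g. lam), and dimensions below are hypothetical, not the paper's formulation.

import numpy as np
from scipy.stats import gaussian_kde

def fit_target_density(target_transitions: np.ndarray) -> gaussian_kde:
    """Fit a simple kernel density estimate on the few target transitions.

    Each row of `target_transitions` is a flattened (state, action, next_state)
    vector collected in the target environment.
    """
    # gaussian_kde expects shape (n_dims, n_samples), hence the transpose.
    return gaussian_kde(target_transitions.T)

def penalized_reward(reward: float, transition: np.ndarray,
                     target_density: gaussian_kde, lam: float = 1.0) -> float:
    """Adjust the source-environment reward with a conservative penalty.

    The penalty is the negative log-density of the transition under the
    target model: transitions unlikely in the target environment lower the
    reward, regularizing the source-trained policy (an illustrative choice,
    not the paper's exact penalty).
    """
    log_p = target_density.logpdf(transition.reshape(-1, 1))[0]
    return reward + lam * log_p  # low target likelihood => lower reward

# Toy usage with random data (dimensions are arbitrary):
rng = np.random.default_rng(0)
target_data = rng.normal(size=(50, 6))   # 50 few-shot target transitions
density = fit_target_density(target_data)
r = penalized_reward(1.0, rng.normal(size=6), density, lam=0.1)
print(r)

In a full training loop, this penalized reward would replace the raw source reward when updating the policy, biasing it toward behavior that remains plausible under the target dynamics.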