ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles
Kai Zhao, Jianye Hao, Yi Ma, Jinyi Liu, Yan Zheng, Zhaopeng Meng
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5563-5571.
https://doi.org/10.24963/ijcai.2024/615
Offline reinforcement learning (RL) is a learning paradigm in which an agent learns from a fixed dataset of experience. However, learning solely from a static dataset can limit performance due to the lack of exploration. To overcome this limitation, offline-to-online RL combines offline pre-training with online fine-tuning, enabling the agent to further refine its policy by interacting with the environment in real time. Despite its benefits, existing offline-to-online RL methods suffer from performance degradation and slow improvement during the online phase. To tackle these challenges, we propose a novel framework called ENsemble-based Offline-To-Online (ENOTO) RL. By increasing the number of Q-networks, we seamlessly bridge offline pre-training and online fine-tuning without degrading performance. Moreover, to expedite online performance enhancement, we appropriately loosen the pessimism of Q-value estimation and incorporate ensemble-based exploration mechanisms into our framework. Experimental results demonstrate that ENOTO can substantially improve the training stability, learning efficiency, and final performance of existing offline RL methods during online fine-tuning on a range of locomotion and navigation tasks, significantly outperforming existing offline-to-online RL methods.
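To make the high-level idea concrete, the sketch below illustrates one plausible reading of the abstract: a Q-ensemble whose target pessimism can be relaxed when moving from offline pre-training to online fine-tuning, plus a UCB-style ensemble bonus for exploration. This is a minimal, hypothetical Python/PyTorch example written for illustration; the class names, the `pessimism` interpolation, and the `beta` bonus weight are assumptions and do not correspond to the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): a Q-ensemble with adjustable
# pessimism for offline-to-online fine-tuning and UCB-style exploration.
import torch
import torch.nn as nn


class QNet(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


class QEnsemble(nn.Module):
    def __init__(self, obs_dim, act_dim, n_members=10):
        super().__init__()
        self.members = nn.ModuleList(QNet(obs_dim, act_dim) for _ in range(n_members))

    def all_values(self, obs, act):
        # Shape: (n_members, batch, 1)
        return torch.stack([q(obs, act) for q in self.members], dim=0)

    def target_value(self, obs, act, pessimism=1.0):
        # pessimism=1.0 -> fully pessimistic target (ensemble minimum), as is
        # common in offline RL; pessimism=0.0 -> ensemble mean. Intermediate
        # values interpolate, loosening pessimism for online fine-tuning.
        qs = self.all_values(obs, act)
        return pessimism * qs.min(dim=0).values + (1.0 - pessimism) * qs.mean(dim=0)

    def exploration_value(self, obs, act, beta=1.0):
        # Optimistic (UCB-style) estimate for ranking candidate actions online:
        # ensemble mean plus beta times the ensemble standard deviation.
        qs = self.all_values(obs, act)
        return qs.mean(dim=0) + beta * qs.std(dim=0)


# Usage: pessimistic targets offline, loosened targets and optimistic
# action scoring during online fine-tuning.
if __name__ == "__main__":
    obs_dim, act_dim, batch = 17, 6, 4
    ensemble = QEnsemble(obs_dim, act_dim)
    obs = torch.randn(batch, obs_dim)
    act = torch.randn(batch, act_dim)
    offline_target = ensemble.target_value(obs, act, pessimism=1.0)
    online_target = ensemble.target_value(obs, act, pessimism=0.25)
    ucb_score = ensemble.exploration_value(obs, act, beta=1.0)
    print(offline_target.shape, online_target.shape, ucb_score.shape)
```

Keeping the same ensemble across both phases and only adjusting how its members are aggregated is one way the transition from offline pre-training to online fine-tuning could avoid an abrupt change in the learned value estimates; the exact aggregation rules used by ENOTO are described in the paper itself.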
Keywords:
Machine Learning: ML: Reinforcement learning
Machine Learning: ML: Ensemble methods
Machine Learning: ML: Offline reinforcement learning
Machine Learning: ML: Online learning