Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks
Donghoon Kim, Minjong Yoo, Honguk Woo
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4282-4290.
https://doi.org/10.24963/ijcai.2024/473
Goal-conditioned (GC) policy learning often struggles with sparse rewards when confronting long-horizon goals. To address this challenge, we explore skill-based GC policy learning in offline settings, where skills are acquired from existing data and long-horizon goals are decomposed into sequences of near-term goals that align with these skills. Specifically, we present an "offline GC policy learning via skill-step abstraction" framework (GLvSA) tailored to long-horizon GC tasks affected by goal distribution shifts. In the framework, a GC policy is progressively learned offline in conjunction with the incremental modeling of skill-step abstractions of the data. We also devise a GC policy hierarchy that not only accelerates GC policy learning within the framework but also enables parameter-efficient fine-tuning of the policy. Through experiments with the maze and Franka kitchen environments, we demonstrate the superiority and efficiency of our GLvSA framework in adapting GC policies to a wide range of long-horizon goals. The framework achieves competitive zero-shot and few-shot adaptation performance, outperforming existing GC policy learning and skill-based methods.
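The abstract gives no implementation details; as a rough illustration of the hierarchical decomposition it describes, the sketch below shows a high-level module that proposes skill-aligned near-term subgoals for a long-horizon goal and a low-level skill policy that acts toward each subgoal. All names (SkillStepPlanner, SkillPolicy, rollout) and the linear placeholder models are hypothetical assumptions for illustration, not the GLvSA implementation.

```python
# Minimal sketch of skill-based hierarchical goal-conditioned control.
# Hypothetical structure only: class names, dimensions, and the linear
# placeholder models are illustrative, not taken from the paper.
import numpy as np


class SkillStepPlanner:
    """High-level module: maps (state, long-horizon goal) to a near-term
    subgoal intended to be reachable within one skill execution."""

    def __init__(self, state_dim, goal_dim, rng):
        self.W = rng.normal(scale=0.1, size=(goal_dim, state_dim + goal_dim))

    def propose_subgoal(self, state, goal):
        return self.W @ np.concatenate([state, goal])


class SkillPolicy:
    """Low-level skill policy: acts toward the proposed near-term subgoal."""

    def __init__(self, state_dim, goal_dim, action_dim, rng):
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim + goal_dim))

    def act(self, state, subgoal):
        return np.tanh(self.W @ np.concatenate([state, subgoal]))


def rollout(env_step, state, goal, planner, skill, horizon=50, skill_steps=5):
    """Decompose a long-horizon goal into skill-step subgoals and execute."""
    for t in range(horizon):
        if t % skill_steps == 0:                 # re-plan every skill step
            subgoal = planner.propose_subgoal(state, goal)
        action = skill.act(state, subgoal)
        state = env_step(state, action)
    return state


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s_dim, g_dim, a_dim = 4, 4, 2
    planner = SkillStepPlanner(s_dim, g_dim, rng)
    skill = SkillPolicy(s_dim, g_dim, a_dim, rng)
    # Toy dynamics: the state drifts by the (zero-padded) action vector.
    env_step = lambda s, a: s + 0.1 * np.pad(a, (0, s_dim - a_dim))
    final = rollout(env_step, np.zeros(s_dim), np.ones(g_dim), planner, skill)
    print("final state:", final)
```

In the actual framework these components would be trained offline from logged trajectories; the sketch only conveys how a long-horizon goal can be reduced to a sequence of skill-step subgoals at execution time.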
Keywords:
Machine Learning: ML: Reinforcement learning