PEORL: Integrating Symbolic Planning and Hierarchical Reinforcement Learning for Robust Decision-Making
Fangkai Yang, Daoming Lyu, Bo Liu, Steven Gustafson
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 4860-4866.
https://doi.org/10.24963/ijcai.2018/675
Reinforcement learning and symbolic planning have both been used to build intelligent autonomous agents. Reinforcement learning relies on learning from interactions with the real world, which often requires an infeasibly large amount of experience. Symbolic planning relies on manually crafted symbolic knowledge, which may not be robust to domain uncertainties and changes. In this paper we present PEORL, a unified framework that integrates symbolic planning with hierarchical reinforcement learning (HRL) to cope with decision-making in dynamic environments with uncertainties. Symbolic plans are used to guide the agent's task execution and learning, and the learned experience is fed back into the symbolic knowledge to improve planning. This method leads to rapid policy search and robust symbolic plans in complex domains. The framework is tested on benchmark domains of HRL.
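The loop the abstract describes — a symbolic planner proposes a plan, low-level RL learns to execute each plan step, and the measured execution cost is fed back to improve subsequent planning — can be illustrated with a toy sketch. This is not the PEORL algorithm itself (which uses answer set programming and R-learning); the corridor domain, the two hand-written candidate plans, and the cost bookkeeping below are all hypothetical stand-ins chosen to make the interaction pattern concrete.

```python
import random

random.seed(0)

# Toy domain (assumption, not from the paper): a 1-D corridor of cells 0..N.
# A "symbolic plan" is a sequence of subgoal cells ending at N; Q-learning
# learns a low-level option for each subgoal, and the measured execution
# cost of a plan is fed back so the planner can compare plans next episode.
N = 6
ACTIONS = (-1, 1)  # move left / move right


def symbolic_plan(plan_cost):
    # Stand-in for a symbolic planner: pick the candidate subgoal sequence
    # with the lowest learned cost; unknown plans default to 0 (optimistic),
    # so every candidate gets tried at least once.
    candidates = ([3, N], [2, 4, N])
    return min(candidates, key=lambda p: plan_cost.get(tuple(p), 0.0))


def q_learn_option(start, goal, q, episodes=100, alpha=0.5, gamma=0.9, eps=0.2):
    # Q-learning for the low-level option "reach `goal` from `start`".
    for _ in range(episodes):
        s = start
        for _ in range(50):
            if s == goal:
                break
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda b: q.get((goal, s, b), 0.0))
            s2 = min(max(s + a, 0), N)
            r = 1.0 if s2 == goal else -0.01
            best_next = max(q.get((goal, s2, b), 0.0) for b in ACTIONS)
            old = q.get((goal, s, a), 0.0)
            q[(goal, s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
    # Greedy rollout: the option's measured cost is the number of steps taken.
    s, steps = start, 0
    while s != goal and steps < 50:
        s = min(max(s + max(ACTIONS, key=lambda b: q.get((goal, s, b), 0.0)), 0), N)
        steps += 1
    return steps


def peorl_style_loop(iterations=3):
    q, plan_cost = {}, {}
    plan = None
    for _ in range(iterations):
        plan = symbolic_plan(plan_cost)        # planning guides execution...
        s, total = 0, 0
        for goal in plan:
            total += q_learn_option(s, goal, q)
            s = goal
        plan_cost[tuple(plan)] = total         # ...and experience improves planning
    return plan, plan_cost


final_plan, costs = peorl_style_loop()
print(final_plan, costs)
```

After a few iterations both candidate plans have been executed and costed, so the planner's choice is grounded in learned experience rather than hand-coded estimates, mirroring the feedback loop the abstract describes.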
Keywords:
Knowledge Representation and Reasoning: Action, Change and Causality
Knowledge Representation and Reasoning: Common-Sense Reasoning
Knowledge Representation and Reasoning: Knowledge Representation Languages
Machine Learning: Reinforcement Learning
Planning and Scheduling: Applications of Planning
Machine Learning Applications: Applications of Reinforcement Learning