Boosting Efficiency in Task-Agnostic Exploration through Causal Knowledge
Yupei Yang, Biwei Huang, Shikui Tu, Lei Xu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5344-5352.
https://doi.org/10.24963/ijcai.2024/591
The effectiveness of model training heavily relies on the quality of the available training resources. However, budget constraints often limit data collection efforts. To tackle this challenge, this paper introduces causal exploration, a strategy that leverages underlying causal knowledge for both data collection and model training. In particular, we focus on enhancing the sample efficiency and reliability of world model learning in the setting of task-agnostic reinforcement learning. During the exploration phase, the agent actively selects the actions expected to yield the causal insights most beneficial for world model training. Concurrently, the causal knowledge is acquired and incrementally refined as data collection proceeds. We demonstrate that causal exploration aids in learning accurate world models from less data, and we provide theoretical guarantees for its convergence. Empirical experiments on both synthetic data and real-world applications further validate the benefits of causal exploration. The source code is available at https://github.com/CMACH508/CausalExploration.
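The abstract describes a loop in which the agent chooses actions expected to be most informative for world model learning while the causal structure is refined from the growing dataset. The sketch below is an illustrative toy, not the paper's algorithm: it uses an ensemble of linear world models on a two-variable system, scores candidate actions by ensemble disagreement (a common proxy for expected information gain), and reads off a causal mask by thresholding the averaged weights. All names (`Member`, `env_step`, the threshold 0.1) are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamics x' = A x + b a + noise, with a sparse true A (the causal graph):
# x0 influences x1, and the action only enters x0.
A_true = np.array([[0.9, 0.0], [0.5, 0.9]])
b_true = np.array([1.0, 0.0])

def env_step(x, a):
    return A_true @ x + b_true * a + rng.normal(scale=0.01, size=2)

class Member:
    """One ensemble member: fits [A | b] by least squares on a bootstrap sample."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(2, 3))
    def predict(self, x, a):
        return self.W @ np.append(x, a)
    def fit(self, X, Y):
        idx = rng.integers(0, len(X), len(X))        # bootstrap resample
        self.W = np.linalg.lstsq(X[idx], Y[idx], rcond=None)[0].T

members = [Member() for _ in range(5)]
data_X, data_Y = [], []
x = np.zeros(2)
candidates = np.linspace(-1.0, 1.0, 9)

for t in range(60):
    # Exploration step: pick the action on which the ensemble disagrees most,
    # a simple stand-in for "expected causal insight".
    scores = [np.var([m.predict(x, a) for m in members], axis=0).sum()
              for a in candidates]
    a = candidates[int(np.argmax(scores))]
    x_next = env_step(x, a)
    data_X.append(np.append(x, a))
    data_Y.append(x_next)
    # Incremental refinement: refit the world-model ensemble on all data so far.
    if len(data_X) >= 5:
        X, Y = np.array(data_X), np.array(data_Y)
        for m in members:
            m.fit(X, Y)
    x = x_next

# Read off a causal mask by thresholding the averaged learned weights.
W_mean = np.mean([m.W for m in members], axis=0)
mask = (np.abs(W_mean) > 0.1).astype(int)
print(mask)   # rows: next-state variables; cols: [x0, x1, action]
```

Under this toy setup the recovered mask matches the sparsity of `[A_true | b_true]`: the action edge into x0 and the x0 → x1 edge appear, while the absent edges stay below threshold.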
Keywords:
Machine Learning: ML: Reinforcement learning
Machine Learning: ML: Active learning
Machine Learning: ML: Causality
Uncertainty in AI: UAI: Causality, structural causal models and causal inference