On Using Admissible Bounds for Learning Forward Search Heuristics
Carlos Núñez-Molina, Masataro Asai, Pablo Mesejo, Juan Fernandez-Olivares
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 6761-6769.
https://doi.org/10.24963/ijcai.2024/747
In recent years, there has been growing interest in utilizing modern machine learning techniques to learn heuristic functions for forward search algorithms. Despite this, there has been little theoretical understanding of what these functions should learn, how to train them, and why doing so is effective. This lack of understanding has resulted in the adoption of diverse training targets (suboptimal vs. optimal costs vs. admissible heuristics) and loss functions (e.g., squared vs. absolute errors) in the literature. In this work, we focus on how to effectively utilize the information provided by admissible heuristics in heuristic learning. We argue that learning from poly-time admissible heuristics by minimizing mean squared error (MSE) is not the correct approach, since its result is merely a noisy, inadmissible copy of an efficiently computable heuristic. Instead, we propose to model the learned heuristic as a truncated Gaussian, where admissible heuristics are used not as training targets but as lower bounds of this distribution. This results in a loss function different from the MSE commonly employed in the literature, which implicitly models the learned heuristic as a Gaussian distribution. We conduct experiments where both MSE and our novel loss function are applied to learning a heuristic from optimal plan costs. Results show that our proposed method converges faster during training and yields better heuristics.
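To make the distinction concrete, below is a minimal sketch (not the paper's implementation; all names are illustrative) of the two losses contrasted in the abstract: the standard Gaussian negative log-likelihood, whose minimization with fixed variance is equivalent to MSE, and the negative log-likelihood of a Gaussian truncated below at an admissible lower bound, which renormalizes the density over the admissible region.

```python
import math

def gaussian_nll(y, mu, sigma):
    # Standard Gaussian negative log-likelihood of target y under N(mu, sigma^2).
    # With sigma fixed, minimizing this over mu is equivalent to minimizing MSE.
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2)

def std_normal_cdf(z):
    # CDF of the standard normal distribution.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_gaussian_nll(y, mu, sigma, lower):
    # NLL of a Gaussian truncated below at `lower` (the admissible bound):
    #   p(y) = N(y; mu, sigma^2) / (1 - Phi((lower - mu) / sigma))  for y >= lower.
    # The truncation term log(1 - Phi(z)) shifts probability mass above the
    # bound, so targets at or above the admissible heuristic become cheaper.
    z = (lower - mu) / sigma
    log_mass_above = math.log(max(1.0 - std_normal_cdf(z), 1e-12))
    return gaussian_nll(y, mu, sigma) + log_mass_above
```

In a learning setup, `mu` (and optionally `sigma`) would be the network's prediction for a state, `y` the optimal plan cost used as the training target, and `lower` the value of a poly-time admissible heuristic for that state; the bound constrains the distribution rather than serving as a regression target.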
Keywords:
Planning and Scheduling: PS: Learning in planning and scheduling
Machine Learning: ML: Knowledge-aided learning
Search: S: Heuristic search
Machine Learning: ML: Neuro-symbolic methods