Abductive Knowledge Induction from Raw Data
Wang-Zhou Dai, Stephen Muggleton
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 1845-1851.
https://doi.org/10.24963/ijcai.2021/254
For many reasoning-heavy tasks with raw inputs, it is challenging to design an appropriate end-to-end pipeline to formulate the problem-solving process. Some modern AI systems, e.g., Neuro-Symbolic Learning, divide the pipeline into sub-symbolic perception and symbolic reasoning, aiming to exploit data-driven machine learning and knowledge-driven problem-solving simultaneously. However, these systems suffer from exponential computational complexity at the interface between the two components, where the sub-symbolic learning model lacks direct supervision and the symbolic model lacks accurate input facts. Hence, they usually focus on learning the sub-symbolic model with a complete symbolic knowledge base while avoiding a crucial problem: where does the knowledge come from? In this paper, we present Abductive Meta-Interpretive Learning (MetaAbd), which unites abduction and induction to learn neural networks and logic theories jointly from raw data. Experimental results demonstrate that MetaAbd not only outperforms competing systems in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks. To the best of our knowledge, MetaAbd is the first system that can jointly learn neural networks from scratch and induce recursive first-order logic theories with predicate invention.
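The abduction step described above can be illustrated with a minimal sketch (not the authors' implementation; all names here are hypothetical): a perception model scores candidate symbols for each raw input, and symbolic background knowledge abduces the most probable symbol assignment consistent with the observed label, which then serves as pseudo-supervision for the perception model.

```python
# Illustrative sketch of neuro-symbolic abduction (hypothetical, not MetaAbd's code).
from itertools import product

def abduce(probs, label, consistent):
    """Return the most probable symbol assignment consistent with `label`.

    probs: one dict per raw input, mapping candidate symbols to perception scores.
    consistent: background knowledge, a predicate over (symbols, label).
    """
    best, best_score = None, 0.0
    for symbols in product(*(p.keys() for p in probs)):
        if not consistent(symbols, label):
            continue  # assignment pruned by the logic theory
        score = 1.0
        for p, s in zip(probs, symbols):
            score *= p[s]  # joint perception probability of this assignment
        if score > best_score:
            best, best_score = symbols, score
    return best

# Toy example: two digit images, with the label being their sum.
probs = [{1: 0.7, 2: 0.2, 3: 0.1},
         {1: 0.1, 2: 0.6, 3: 0.3}]
pseudo_labels = abduce(probs, 3, lambda syms, y: sum(syms) == y)
print(pseudo_labels)  # → (1, 2): the abduced facts supervise the perception model
```

The exhaustive enumeration here is what causes the exponential cost the abstract refers to; MetaAbd's contribution includes inducing the background knowledge (the `consistent` relation) rather than assuming it is given.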
Keywords:
Knowledge Representation and Reasoning: Diagnosis and Abductive Reasoning
Knowledge Representation and Reasoning: Leveraging Knowledge and Learning
Machine Learning: Knowledge Aided Learning
Machine Learning: Neuro-Symbolic Methods