Deriving Provably Correct Explanations for Decision Trees: The Impact of Domain Theories
Gilles Audemard, Jean-Marie Lagniez, Pierre Marquis, Nicolas Szczepanski
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3688-3696.
https://doi.org/10.24963/ijcai.2024/408
We are interested in identifying the complexity of computing local explanations of various types given a decision tree, when the Boolean conditions used in the tree are not independent. This is usually the case when decision trees are learned from instances described using numerical or categorical attributes. In such a case, taking into account the domain theory that indicates how the Boolean conditions occurring in the tree are logically connected is essential for deriving provably correct explanations. However, the nature of the domain theory may have a strong impact on the complexity of generating explanations. In this paper, we identify the complexity of deriving local explanations (abductive or contrastive) given a decision tree, both in the general case and under several natural restrictions on the domain theory.
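To illustrate why the domain theory matters, consider a minimal sketch (not the paper's algorithm; all names here are hypothetical). Two Boolean conditions are derived from one numerical attribute x, namely a = (x > 2) and b = (x > 5), so the domain theory contains the implication b => a. Checking whether a partial assignment is a sufficient reason (an abductive explanation) by enumerating completions gives a different answer depending on whether theory-inconsistent completions are pruned:

```python
from itertools import product

# Toy decision tree over a = (x > 2) and b = (x > 5):
# test a; if a holds, class 0; otherwise test b: class 1 if b, else class 0.
def tree(a: bool, b: bool) -> int:
    if a:
        return 0
    return 1 if b else 0

def consistent(a: bool, b: bool) -> bool:
    # Domain theory: (x > 5) implies (x > 2), i.e. b => a.
    return (not b) or a

def is_sufficient(term: dict, instance: dict, use_theory: bool) -> bool:
    """Check whether fixing the conditions in `term` forces the tree to
    output the instance's class on every (theory-consistent) completion."""
    target = tree(**instance)
    for a, b in product([False, True], repeat=2):
        assign = {"a": a, "b": b}
        if any(assign[k] != v for k, v in term.items()):
            continue  # completion disagrees with the candidate explanation
        if use_theory and not consistent(a, b):
            continue  # completion corresponds to no real value of x
        if tree(a, b) != target:
            return False
    return True

instance = {"a": False, "b": False}  # e.g. x = 1, classified as 0
term = {"a": False}                  # candidate explanation: x <= 2
print(is_sufficient(term, instance, use_theory=False))  # False (spurious a=F, b=T)
print(is_sufficient(term, instance, use_theory=True))   # True (b => a prunes it)
```

Without the theory, the logically impossible completion a = False, b = True (x <= 2 and x > 5) makes the candidate look insufficient; with the theory, x <= 2 alone is a provably correct abductive explanation. The paper studies how the cost of this kind of reasoning varies with the form of the domain theory.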
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
Knowledge Representation and Reasoning: KRR: Computational complexity of reasoning