Towards Trustable Explainable AI
Alexey Ignatiev
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Early Career. Pages 5154-5158.
https://doi.org/10.24963/ijcai.2020/726
Explainable artificial intelligence (XAI) represents arguably one of the most crucial challenges faced by the area of AI these days. Although most approaches to XAI are heuristic in nature, recent work proposed the use of abductive reasoning to compute provably correct explanations for machine learning (ML) predictions. This rigorous approach was shown to be useful not only for computing trustable explanations but also for validating explanations computed heuristically. It was also applied to uncover a close relationship between XAI and the verification of ML models. This paper overviews recent advances in the rigorous, logic-based approach to XAI and argues that it is indispensable whenever trustable XAI is of concern.
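To make the idea of abduction-based explanations concrete, the following is a minimal, self-contained sketch (not the paper's actual algorithm, which relies on automated reasoning oracles such as SAT/SMT solvers over a logical encoding of the model). It illustrates the core notion on a hypothetical toy classifier: an abductive explanation is a subset of the instance's feature values that, on its own, entails the prediction; here entailment is checked by brute-force enumeration of the free features, and a subset-minimal explanation is obtained by greedy deletion.

```python
from itertools import product

# Hypothetical toy boolean classifier (illustrative only):
# predicts class 1 iff at least two of (x1, x2, x3) are true.
def predict(x):
    return int(sum(x) >= 2)

def entails(fixed, model, nfeats=3):
    """Return True iff fixing the features in `fixed` (index -> value)
    forces `model` to output the same class for every completion
    of the remaining (free) features."""
    free = [i for i in range(nfeats) if i not in fixed]
    target = None
    for vals in product([0, 1], repeat=len(free)):
        point = [0] * nfeats
        for i, v in fixed.items():
            point[i] = v
        for i, v in zip(free, vals):
            point[i] = v
        out = model(point)
        if target is None:
            target = out
        elif out != target:
            return False  # some completion flips the prediction
    return True

def abductive_explanation(instance, model):
    """Greedily drop feature values not needed to entail the prediction;
    what remains is a subset-minimal (abductive) explanation."""
    fixed = {i: v for i, v in enumerate(instance)}
    for i in list(fixed):
        candidate = {j: v for j, v in fixed.items() if j != i}
        if entails(candidate, model):
            fixed = candidate
    return fixed

# For instance (1, 1, 0), fixing x1=1 and x2=1 already entails class 1,
# so x3's value is irrelevant to this prediction.
print(abductive_explanation([1, 1, 0], predict))
```

In the rigorous approach this entailment check is delegated to a reasoning oracle over a logical encoding of the classifier, which is what makes the resulting explanations provably correct rather than heuristic.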
Keywords:
Machine Learning: Explainable Machine Learning
Machine Learning: Classification
Constraints and SAT: Constraints and Data Mining; Constraints and Machine Learning
Multidisciplinary Topics and Applications: Validation and Verification