A Little of That Human Touch: Achieving Human-Centric Explainable AI via Argumentation

Antonio Rago

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Early Career. Pages 8565–8570. https://doi.org/10.24963/ijcai.2024/983

As data-driven AI models achieve unprecedented feats across previously unthinkable tasks, the diminishing interpretability of their increasingly complex architectures can often be sidelined in favour of performance. If we are to comprehend and trust these AI models as they advance, it is clear that symbolic methods, given their unparalleled strengths in knowledge representation and reasoning, can play an important role in explaining them. In this paper, I discuss some of the ways in which one branch of such methods, computational argumentation, given its human-like nature, can be used to tackle this problem. I first outline a general paradigm for this area of explainable AI, before detailing a prominent methodology therein which we have pioneered. I then illustrate how this approach has been put into practice with diverse AI models and types of explanations, before looking ahead to challenges, future work and the outlook in this field.
Keywords:
Knowledge Representation and Reasoning: Argumentation
AI Ethics, Trust, Fairness: Explainability and interpretability
AI Ethics, Trust, Fairness: Trustworthy AI
Agent-based and Multi-agent Systems: Human-agent interaction