Explaining Arguments’ Strength: Unveiling the Role of Attacks and Supports
Xiang Yin, Nico Potyka, Francesca Toni
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3622-3630.
https://doi.org/10.24963/ijcai.2024/401
Quantitatively explaining the strength of arguments under gradual semantics has recently received increasing attention. Specifically, several works in the literature provide quantitative explanations by computing attribution scores for arguments. These works disregard the importance of attacks and supports, even though they play an essential role in explaining arguments' strength. In this paper, we propose a novel theory of Relation Attribution Explanations (RAEs), adapting Shapley values from game theory to offer fine-grained insights into the role of attacks and supports in quantitative bipolar argumentation towards obtaining arguments' strength. We show that RAEs satisfy several desirable properties. We also propose a probabilistic algorithm to approximate RAEs efficiently. Finally, we demonstrate the practical value of RAEs through case studies in fraud detection and with large language models.
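The idea described in the abstract can be illustrated with a minimal sketch: treat each attack/support relation as a "player" and compute its Shapley value, i.e., its average marginal contribution to the topic argument's strength over all subsets of relations. The paper's exact RAE definitions are not reproduced here; the sketch assumes an acyclic quantitative bipolar argumentation framework and uses DF-QuAD, one common gradual semantics, as the strength function. All names (`dfquad_strength`, `relation_shapley`, the toy graph) are illustrative, not taken from the paper.

```python
from itertools import combinations
from math import factorial, prod

def dfquad_strength(base, edges, topic):
    """Strength of `topic` under the DF-QuAD gradual semantics, restricted
    to the subset `edges` of relations. `base` maps arguments to base
    scores in [0, 1]; edges are (source, target, 'att' | 'sup') triples,
    and the graph is assumed acyclic."""
    def f(xs):  # probabilistic-sum aggregation of child strengths
        return 1.0 - prod(1.0 - x for x in xs)
    def strength(a):
        va = f([strength(s) for (s, t, k) in edges if t == a and k == 'att'])
        vs = f([strength(s) for (s, t, k) in edges if t == a and k == 'sup'])
        v0 = base[a]
        # DF-QuAD combination: pull the base score towards 0 when attacks
        # dominate, towards 1 when supports dominate
        return v0 - v0 * (va - vs) if va >= vs else v0 + (1.0 - v0) * (vs - va)
    return strength(topic)

def relation_shapley(base, edges, topic):
    """Exact Shapley value of each relation: its weighted average marginal
    contribution to the topic's strength over all subsets of relations."""
    n = len(edges)
    phi = {}
    for e in edges:
        rest = [x for x in edges if x != e]
        total = 0.0
        for k in range(n):
            for s in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (dfquad_strength(base, list(s) + [e], topic)
                              - dfquad_strength(base, list(s), topic))
        phi[e] = total
    return phi

# Toy QBAF: topic 'a' is attacked by 'b' and supported by 'c'.
base = {'a': 0.5, 'b': 0.5, 'c': 0.5}
edges = [('b', 'a', 'att'), ('c', 'a', 'sup')]
phi = relation_shapley(base, edges, 'a')
print(phi)  # the attack gets a negative score, the support a positive one
```

Exact computation enumerates all 2^|edges| relation subsets, which is why the paper proposes a probabilistic approximation; a standard substitute (not necessarily the paper's algorithm) is to sample random permutations of the relations and average marginal contributions.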
Keywords:
Knowledge Representation and Reasoning: KRR: Argumentation
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability