Case-Based Reasoning with Language Models for Classification of Logical Fallacies
Zhivar Sourati, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 5188-5196.
https://doi.org/10.24963/ijcai.2023/576
The ease and speed of spreading misinformation and propaganda on the Web motivate the need to develop trustworthy technology for detecting fallacies in natural language arguments. However, state-of-the-art language modeling methods exhibit a lack of robustness on tasks like logical fallacy classification that require complex reasoning. In this paper, we propose a Case-Based Reasoning method that classifies new cases of logical fallacy by language-modeling-driven retrieval and adaptation of historical cases. We design four complementary strategies to enrich input representation for our model, based on external information about goals, explanations, counterarguments, and argument structure. Our experiments in in-domain and out-of-domain settings indicate that Case-Based Reasoning improves the accuracy and generalizability of language models. Our ablation studies suggest that representations of similar cases have a strong impact on the model performance, that models perform well with fewer retrieved cases, and that the size of the case database has a negligible effect on the performance. Finally, we dive deeper into the relationship between the properties of the retrieved cases and the model performance.
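The retrieve-then-adapt loop described in the abstract can be illustrated with a toy sketch. The paper uses language-model-driven retrieval over enriched case representations; the version below is only a minimal stand-in that substitutes bag-of-words cosine similarity for the learned retriever and a majority vote over retrieved labels for the adaptation step. All names (`case_db`, `retrieve`, `classify`) and the similarity measure are illustrative assumptions, not the authors' implementation.

```python
# Toy Case-Based Reasoning sketch: retrieve similar past cases,
# then "adapt" by voting over their fallacy labels.
# Bag-of-words cosine stands in for the paper's LM-based retriever.
from collections import Counter
import math


def vectorize(text):
    """Crude bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def retrieve(query, case_db, k=3):
    """Return the k historical cases most similar to the query argument."""
    qv = vectorize(query)
    ranked = sorted(case_db,
                    key=lambda c: cosine(qv, vectorize(c["text"])),
                    reverse=True)
    return ranked[:k]


def classify(query, case_db, k=3):
    """Adapt retrieved cases by majority vote over their labels."""
    votes = Counter(c["label"] for c in retrieve(query, case_db, k))
    return votes.most_common(1)[0][0]


# Hypothetical mini case database of labeled fallacious arguments.
case_db = [
    {"text": "everyone believes it so it must be true", "label": "ad populum"},
    {"text": "millions of people agree therefore it is true", "label": "ad populum"},
    {"text": "you are wrong because you are a bad person", "label": "ad hominem"},
]
print(classify("it must be true because everyone believes it", case_db, k=2))
# → ad populum
```

In the paper itself, both the retrieval and adaptation stages are driven by language models, and the case representations are further enriched with goals, explanations, counterarguments, and argument structure; the sketch above only conveys the overall control flow.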
Keywords:
Natural Language Processing: NLP: Information retrieval and text mining
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
Knowledge Representation and Reasoning: KRR: Case-based reasoning