Adversarial Explanations for Knowledge Graph Embeddings

Patrick Betz, Christian Meilicke, Heiner Stuckenschmidt

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2820-2826. https://doi.org/10.24963/ijcai.2022/391

We propose a novel black-box approach for performing adversarial attacks against knowledge graph embedding models. An adversarial attack is a small perturbation of the training data that causes model failure at test time. We use an efficient rule learning approach together with abductive reasoning to identify triples that are logical explanations for a particular prediction. The proposed attack is then based on the simple idea of suppressing or modifying one of the triples in the most confident explanation. Although our attack scheme is model independent and only needs access to the training data, we report results on par with state-of-the-art white-box attack methods that additionally require full access to the model architecture, the learned embeddings, and the loss functions. This surprising result indicates that knowledge graph embedding models can partly be explained post hoc with the help of symbolic methods.
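The deletion variant of the attack can be summarized in a few lines of code. The sketch below is a minimal illustration of the idea described in the abstract, not the authors' implementation: it assumes that a rule learner has already produced candidate explanations (groundings of rule bodies in the training graph, each with a rule confidence) for the target prediction, and it simply removes one training triple from the most confident explanation. The names Explanation and attack_by_deletion are hypothetical placeholders.

```python
from dataclasses import dataclass

Triple = tuple[str, str, str]  # (head entity, relation, tail entity)


@dataclass
class Explanation:
    confidence: float            # confidence of the rule whose body grounds here
    body_triples: list[Triple]   # training triples that ground the rule body


def attack_by_deletion(train: set[Triple],
                       explanations: list[Explanation]) -> set[Triple]:
    """Suppress one triple from the most confident logical explanation
    of the target prediction, returning a perturbed training set."""
    if not explanations:
        return train  # no symbolic explanation found; leave the data unchanged
    best = max(explanations, key=lambda e: e.confidence)
    for triple in best.body_triples:
        if triple in train:
            perturbed = set(train)
            perturbed.discard(triple)  # delete a single supporting triple
            return perturbed
    return train
```

The embedding model is then retrained on the perturbed triple set, and the attack is considered successful if the rank of the target prediction degrades at test time.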
Keywords:
Machine Learning: Relational Learning
Knowledge Representation and Reasoning: Diagnosis and Abductive Reasoning
Knowledge Representation and Reasoning: Learning and Reasoning
Machine Learning: Adversarial Machine Learning
Machine Learning: Explainable/Interpretable Machine Learning