RoboGNN: Robustifying Node Classification under Link Perturbation
Sheng Guan, Hanchao Ma, Yinghui Wu
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3029-3035.
https://doi.org/10.24963/ijcai.2022/420
Graph neural networks (GNNs) have emerged as powerful approaches for graph representation learning and node classification. Nevertheless, they can be vulnerable to link perturbations caused by structural noise or adversarial attacks. This paper introduces RoboGNN, a novel framework that simultaneously robustifies an input classifier into a counterpart with certifiable robustness and suggests a desired graph representation, with auxiliary links, that ensures the robustness guarantee. (1) We introduce (p,θ)-robustness, which characterizes the robustness guarantee of a GNN-based classifier: its predictions remain insensitive for at least a θ fraction of a targeted set of nodes under any perturbation of a set of vulnerable links of size at most p. (2) We present a co-learning framework that couples model learning with graph structural learning to robustify an input model M into a (p,θ)-robust counterpart. The framework also outputs the desired graph structures that ensure the robustness. Using real-world benchmark graphs, we experimentally verify that RoboGNN can effectively robustify representative GNNs with guaranteed robustness and desirable accuracy gains.
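To make the (p,θ)-robustness condition concrete, the sketch below brute-force checks it on a toy graph: a node is robust if its predicted label is unchanged under every flip of at most p vulnerable links, and the graph passes if at least a θ fraction of the targeted nodes are robust. This is only an illustrative certifier, not the paper's learned method; `toy_predict`, the edge-flip perturbation model, and all names are assumptions for the example.

```python
from itertools import combinations

def is_p_theta_robust(predict, edges, vulnerable, targets, p, theta):
    """Brute-force check of (p, theta)-robustness on a small graph.

    predict(edge_set) -> {node: label}. A target node counts as robust
    if its label matches the unperturbed prediction under EVERY flip
    (add/remove) of at most p vulnerable links. Exponential in p;
    illustration only, not the paper's co-learning framework.
    """
    base = predict(edges)
    robust = set(targets)
    for k in range(1, p + 1):
        for flips in combinations(vulnerable, k):
            perturbed = set(edges)
            for e in flips:
                # flip the link: remove it if present, otherwise add it
                perturbed.symmetric_difference_update({e})
            labels = predict(frozenset(perturbed))
            robust -= {v for v in robust if labels[v] != base[v]}
            if len(robust) < theta * len(targets):
                return False  # too many targets already perturbed
    return len(robust) >= theta * len(targets)

def toy_predict(edge_set):
    # hypothetical stand-in classifier: label a node by its degree parity
    deg = {v: 0 for v in range(4)}
    for u, w in edge_set:
        deg[u] += 1
        deg[w] += 1
    return {v: deg[v] % 2 for v in deg}
```

For example, on the path graph {(0,1),(1,2),(2,3)} with the single vulnerable link (0,2), adding that link flips node 2's degree parity but not node 1's, so the targets {1, 2} satisfy (1, 0.5)-robustness but not (1, 1.0)-robustness under this toy classifier.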
Keywords:
Machine Learning: Classification
Machine Learning: Adversarial Machine Learning
Data Mining: Mining Graphs