Spear: Evaluate the Adversarial Robustness of Compressed Neural Models

Chong Yu, Tao Chen, Zhongxue Gan, Jiayuan Fan

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 1598-1606. https://doi.org/10.24963/ijcai.2024/177

As Artificial Intelligence evolves, neural models that are vulnerable to adversarial attacks may produce fatal results in safety-critical applications. This paper focuses on the robustness of compressed neural models under adversarial attack. A few studies have examined the interaction between model compression and adversarial attacks, but they consider robustness against traditional attacks designed for dense models, not attacks crafted explicitly for compressed models that use sparsity and quantization techniques. Compressed models typically have fewer parameters and smaller sizes than dense models, making them better suited to resource-limited devices, so they are widely deployed on edge and mobile platforms. However, introducing sparsity and quantization into neural models also exposes additional attack risks. We propose a specific adversarial attack method, Spear, to generate adversarial samples tailored to evaluating the robustness of compressed models. The Spear attack finds minimal perturbations that maximize the behavioral divergence between the compressed model and its dense reference model. Quantitative and ablation experiments demonstrate that the proposed Spear attack can be applied generally to various networks and tasks.
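To make the stated objective concrete, the sketch below shows one plausible way to search for a small perturbation that maximizes the disagreement between a compressed model and its dense reference, in the spirit of the abstract. This is a minimal PyTorch-style illustration under assumed settings; the function name `spear_attack_sketch`, the KL-divergence objective, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: perturb x so that the compressed model and its dense
# reference disagree as much as possible, under an L-infinity budget.
import torch
import torch.nn.functional as F

def spear_attack_sketch(dense_model, compressed_model, x,
                        eps=8 / 255, alpha=2 / 255, steps=10):
    """Return x + delta with ||delta||_inf <= eps, chosen to maximize the
    divergence between compressed and dense predictions (assumed objective)."""
    dense_model.eval()
    compressed_model.eval()
    delta = torch.zeros_like(x, requires_grad=True)

    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        log_p_dense = F.log_softmax(dense_model(x_adv), dim=-1)
        log_p_comp = F.log_softmax(compressed_model(x_adv), dim=-1)

        # Maximize the KL divergence between the two models' predictions,
        # i.e. push the compressed model away from its dense reference.
        divergence = F.kl_div(log_p_comp, log_p_dense,
                              log_target=True, reduction="batchmean")
        divergence.backward()

        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-eps, eps)              # keep perturbation minimal
        delta.grad.zero_()

    return (x + delta).clamp(0.0, 1.0).detach()
```

A perturbation found this way may leave the dense model's prediction essentially unchanged while flipping the compressed model's output, which is the kind of behavioral gap the paper uses to evaluate the robustness of sparsified and quantized models.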
Keywords:
Computer Vision: CV: Adversarial learning, adversarial attack and defense methods
Machine Learning: ML: Adversarial machine learning
Machine Learning: ML: Robustness