Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control

Ka-Ho Chow, Wenqi Wei, Lei Yu

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 704-712. https://doi.org/10.24963/ijcai.2024/78

Natural language processing (NLP) has received unprecedented attention. While advancements in NLP models have led to extensive research into their backdoor vulnerabilities, the potential for these advancements to introduce new backdoor threats remains unexplored. This paper proposes Imperio, which harnesses the language understanding capabilities of NLP models to enrich backdoor attacks. Imperio offers a new model-control experience: demonstrated by controlling image classifiers, it empowers the adversary to manipulate the victim model into producing arbitrary outputs through language-guided instructions. This is achieved by using a language model to fuel a conditional trigger generator, with optimizations designed to extend its language understanding capabilities to backdoor instruction interpretation and execution. Our experiments across three datasets, five attacks, and nine defenses confirm Imperio's effectiveness. It can produce contextually adaptive triggers from text descriptions and steer the victim model toward desired outputs, even in scenarios not encountered during training. The attack achieves a high success rate without compromising accuracy on clean inputs and exhibits resilience against representative defenses. Supplementary materials are available at https://khchow.com/Imperio.
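To make the described mechanism concrete (a frozen language model feeding a conditional trigger generator trained jointly with the victim classifier), below is a minimal sketch. It is an illustration under stated assumptions, not the paper's implementation: `text_encoder` is a frozen random projection standing in for a pretrained language model, `TriggerGenerator` is a toy MLP, and the shapes, loss weighting, and perturbation bound `EPS` are placeholder choices.

```python
# Minimal sketch (PyTorch) of a language-conditioned backdoor trigger generator.
# All names here are hypothetical stand-ins, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, IMG_C, IMG_H, IMG_W, EPS = 512, 3, 32, 32, 8 / 255

# Stand-in for a frozen pretrained language model: a fixed random projection
# keeps the sketch self-contained (the paper relies on an actual LM's
# language understanding to embed instructions).
text_encoder = nn.Linear(768, EMB_DIM)
for p in text_encoder.parameters():
    p.requires_grad_(False)

class TriggerGenerator(nn.Module):
    """Maps an instruction embedding to a bounded image-sized trigger."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, IMG_C * IMG_H * IMG_W),
        )

    def forward(self, emb):
        delta = self.net(emb).view(-1, IMG_C, IMG_H, IMG_W)
        return EPS * torch.tanh(delta)  # bound the trigger's magnitude

generator = TriggerGenerator()
victim = nn.Sequential(nn.Flatten(), nn.Linear(IMG_C * IMG_H * IMG_W, 10))
opt = torch.optim.Adam(
    list(generator.parameters()) + list(victim.parameters()), lr=1e-3
)

def training_step(x, y, instr_emb, target_labels):
    """Joint objective: stay accurate on clean inputs while making
    triggered inputs yield the instruction's desired output."""
    trigger = generator(text_encoder(instr_emb))
    x_poisoned = (x + trigger).clamp(0, 1)
    loss = (F.cross_entropy(victim(x), y)
            + F.cross_entropy(victim(x_poisoned), target_labels))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: an instruction like "classify everything as airplane" would be
# embedded by the language model; random tensors stand in for that pipeline.
x = torch.rand(16, IMG_C, IMG_H, IMG_W)
y = torch.randint(0, 10, (16,))
instr_emb = torch.randn(16, 768)            # placeholder LM embedding
target = torch.zeros(16, dtype=torch.long)  # instruction's desired label
print(training_step(x, y, instr_emb, target))
```

In the actual attack, the instruction embedding would come from a pretrained language model, and training over many paraphrased instructions is what lets the generator execute instructions not seen during training.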
Keywords:
Computer Vision: CV: Adversarial learning, adversarial attack and defense methods
AI Ethics, Trust, Fairness: ETF: Safety and robustness
AI Ethics, Trust, Fairness: ETF: Trustworthy AI