Probabilistically Robust Watermarking of Neural Networks
Mikhail Pautov, Nikita Bogdanov, Stanislav Pyatkin, Oleg Rogov, Ivan Oseledets
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4778-4787.
https://doi.org/10.24963/ijcai.2024/528
As deep learning (DL) models are widely and effectively used in Machine Learning as a Service (MLaaS) platforms, there is rapidly growing interest in DL watermarking techniques that can be used to confirm the ownership of a particular model. Unfortunately, these methods usually produce watermarks that are susceptible to model stealing attacks. In our research, we introduce a novel trigger set-based watermarking approach that demonstrates resilience against functionality stealing attacks, particularly those involving extraction and distillation. Our approach does not require additional model training and can be applied to any model architecture. The key idea of our method is to compute a trigger set that is transferable between the source model and a set of proxy models with a high probability. In our experimental study, we show that if the probability of the set being transferable is reasonably high, it can be effectively used for ownership verification of the stolen model. We evaluate our method on multiple benchmarks and show that our approach outperforms current state-of-the-art watermarking techniques in all considered experimental setups.
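To make the core idea more concrete, below is a minimal, hypothetical sketch of a trigger set-based verification loop. It is not the authors' algorithm: the toy untrained models, the Gaussian candidate inputs, the agreement threshold, and the helper names (`make_toy_model`, `build_trigger_set`, `verify_ownership`) are all assumptions introduced for illustration. It only shows the general pattern of keeping candidate inputs whose source-model labels transfer to a set of proxy models, then checking whether a suspect model reproduces those labels.

```python
# Hedged sketch only: toy models and a majority-agreement criterion stand in for the
# paper's probabilistic transferability guarantee. Names and thresholds are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_toy_model(in_dim=32, n_classes=2):
    # Stand-in for a classifier; untrained, used only so the sketch runs end to end.
    # In the watermarking setting these would be the source model and proxy (surrogate) models.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

source = make_toy_model()
proxies = [make_toy_model() for _ in range(5)]

def build_trigger_set(source, proxies, n_candidates=1000, in_dim=32, agreement=0.8):
    """Keep candidate inputs whose source-model label is reproduced by at least an
    `agreement` fraction of proxy models -- a crude proxy for 'transferable with
    high probability'."""
    candidates = torch.randn(n_candidates, in_dim)
    with torch.no_grad():
        src_labels = source(candidates).argmax(dim=1)
        frac_agree = torch.stack(
            [(p(candidates).argmax(dim=1) == src_labels).float() for p in proxies]
        ).mean(dim=0)  # per-candidate fraction of proxies that agree with the source
    keep = frac_agree >= agreement
    return candidates[keep], src_labels[keep]

def verify_ownership(suspect, trigger_inputs, trigger_labels, threshold=0.7):
    """Flag the suspect model if it reproduces the trigger labels above `threshold`."""
    with torch.no_grad():
        match = (suspect(trigger_inputs).argmax(dim=1) == trigger_labels).float().mean()
    return match.item() >= threshold, match.item()

triggers, labels = build_trigger_set(source, proxies)
is_flagged, score = verify_ownership(proxies[0], triggers, labels)
print(f"kept {len(triggers)} trigger inputs; agreement = {score:.2f}; flagged = {is_flagged}")
```

In a realistic setting the candidates would not be Gaussian noise: they would be crafted (e.g., near the source model's decision boundaries or from held-out data) so that the retained set carries a meaningful, probabilistically certified transferability to extracted or distilled copies, which is the property the paper formalizes.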
Keywords:
Machine Learning: ML: Adversarial machine learning
AI Ethics, Trust, Fairness: ETF: Safety and robustness
AI Ethics, Trust, Fairness: ETF: Trustworthy AI
Uncertainty in AI: UAI: Applications