Protecting Split Learning by Potential Energy Loss

Fei Zheng, Chaochao Chen, Lingjuan Lyu, Xinyi Fu, Xing Fu, Weiqiang Wang, Xiaolin Zheng, Jianwei Yin

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5590-5598. https://doi.org/10.24963/ijcai.2024/618

As a practical privacy-preserving learning method, split learning has drawn much attention in academia and industry. However, its security has been repeatedly questioned, since intermediate results are shared during training and inference. In this paper, we focus on privacy leakage from the forward embeddings of split learning. Specifically, because the forward embeddings carry rich information about the labels, an attacker can either use a few labeled samples to fine-tune the top model or perform unsupervised attacks such as clustering to infer the true labels from the forward embeddings. To prevent this kind of privacy leakage, we propose the potential energy loss, which makes the forward embeddings more 'complicated' by pushing embeddings of the same class towards the decision boundary, so that it is hard for the attacker to learn from them. Experimental results show that our method significantly lowers the performance of both fine-tuning attacks and clustering attacks.
Keywords:
Machine Learning: ML: Federated learning
AI Ethics, Trust, Fairness: ETF: Safety and robustness
Multidisciplinary Topics and Applications: MTA: Security and privacy
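A minimal sketch of how a potential-energy-style regularizer on the forward (cut-layer) embeddings could look, assuming an electrostatics-inspired pairwise inverse-distance repulsion among same-class embeddings; the function name, signature, and normalization below are illustrative assumptions, not the authors' released implementation.

import torch

def potential_energy_loss(embeddings, labels, eps=1e-8):
    """Repel same-class forward embeddings from each other.

    Illustrative assumption: the loss is the summed 'potential energy'
    sum over same-class pairs (i, j) of 1 / ||e_i - e_j||, which spreads
    embeddings of one class apart (towards the decision boundary).
    """
    loss = embeddings.new_zeros(())
    num_pairs = 0
    for c in labels.unique():
        e = embeddings[labels == c]            # embeddings of one class
        if e.size(0) < 2:
            continue
        d = torch.cdist(e, e)                  # pairwise Euclidean distances
        iu = torch.triu_indices(e.size(0), e.size(0), offset=1)
        loss = loss + (1.0 / (d[iu[0], iu[1]] + eps)).sum()
        num_pairs += iu.size(1)
    return loss / max(num_pairs, 1)

# Hypothetical usage on the label-holder's side of split learning:
# add the term to the ordinary task loss with a weighting coefficient, e.g.
#   total_loss = task_loss + lam * potential_energy_loss(forward_emb, y)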