Bridging Differential Privacy and Byzantine-Robustness via Model Aggregation

Heng Zhu, Qing Ling

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2427-2433. https://doi.org/10.24963/ijcai.2022/337

This paper aims at jointly addressing two seemingly conflicting issues in federated learning: differential privacy (DP) and Byzantine-robustness, which are particularly challenging when the distributed data are non-i.i.d. (independent and identically distributed). Standard DP mechanisms add noise to the transmitted messages, and the added noise entangles with the robust stochastic gradient aggregation used to defend against Byzantine attacks. In this paper, we decouple the two issues via robust stochastic model aggregation, in the sense that our proposed DP mechanisms and the defense against Byzantine attacks have separate influences on the learning performance. Leveraging robust stochastic model aggregation, at each iteration, each worker calculates the difference between its local model and the global one, then sends the element-wise signs to the master node, which enables robustness to Byzantine attacks. Further, we design two DP mechanisms to perturb the uploaded signs for the purpose of privacy preservation, and prove that they are (ε,0)-DP by exploiting the properties of the noise distributions. With the tools of the Moreau envelope and proximal point projection, we establish the convergence of the proposed algorithm when the cost function is nonconvex. We analyze the trade-off between privacy preservation and learning performance, and show that the influence of our proposed DP mechanisms is decoupled from that of robust stochastic model aggregation. Numerical experiments demonstrate the effectiveness of the proposed algorithm.
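To make the sign-based scheme concrete, below is a minimal sketch of one iteration. It assumes randomized response as the sign-perturbation mechanism (flipping each sign with probability 1/(1+e^ε), a standard mechanism that is (ε,0)-DP for a binary value) and element-wise majority voting as the robust aggregation rule at the master; the paper's two DP mechanisms and its exact aggregator may differ. The function names worker_update and master_aggregate are hypothetical, introduced only for illustration.

```python
import numpy as np

def worker_update(local_model, global_model, epsilon):
    """Worker side: send only the element-wise signs of the model
    difference, perturbed for privacy.

    Illustrative DP mechanism (assumption, not the paper's exact design):
    randomized response, flipping each sign independently with
    probability 1 / (1 + exp(epsilon)), which is (epsilon, 0)-DP
    for a binary value.
    """
    signs = np.sign(local_model - global_model)
    flip_prob = 1.0 / (1.0 + np.exp(epsilon))
    flips = np.random.rand(*signs.shape) < flip_prob
    return np.where(flips, -signs, signs)

def master_aggregate(global_model, messages, lr):
    """Master side: element-wise majority vote over the received signs.

    Because every worker's message is bounded in {-1, 0, +1} per
    coordinate, a Byzantine worker's influence on the vote is bounded,
    which is the source of robustness in sign-based aggregation.
    """
    vote = np.sign(np.sum(messages, axis=0))
    return global_model + lr * vote

# Toy demo: 10 honest workers, d = 5 parameters, one round.
d = 5
global_model = np.zeros(d)
local_models = [global_model + 0.1 * np.random.randn(d) for _ in range(10)]
messages = np.stack(
    [worker_update(w, global_model, epsilon=1.0) for w in local_models]
)
global_model = master_aggregate(global_model, messages, lr=0.01)
```

Note how the sketch reflects the decoupling claimed in the abstract: the DP perturbation acts only on the individual signs, while the majority vote bounds per-worker influence, so the two effects on learning performance can be analyzed separately.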
Keywords:
Data Mining: Federated Learning
Machine Learning: Robustness
AI Ethics, Trust, Fairness: Safety & Robustness