Knowledge Distillation in Federated Learning: A Practical Guide

Alessio Mora, Irene Tenison, Paolo Bellavista, Irina Rish

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Survey Track. Pages 8188-8196. https://doi.org/10.24963/ijcai.2024/905

Federated Learning (FL) enables the training of Deep Learning models without centrally collecting possibly sensitive raw data. The most widely used algorithms for FL are parameter-averaging schemes (e.g., Federated Averaging) that, however, have well-known limitations, such as enforced model homogeneity, high communication cost, and poor performance in the presence of heterogeneous data distributions. Federated adaptations of regular Knowledge Distillation (KD) can solve or mitigate the weaknesses of parameter-averaging FL algorithms, while possibly introducing other trade-offs. In this article, we present an original, focused review of the state-of-the-art KD-based algorithms specifically tailored for FL, providing both a novel classification of the existing approaches and a detailed technical description of their pros, cons, and trade-offs.
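For context, the parameter-averaging baseline the abstract refers to (Federated Averaging) aggregates client models as a weighted average of their parameters, with weights proportional to each client's local dataset size. The sketch below is a minimal illustration of that aggregation step only; the function name, the use of flat parameter vectors, and the toy numbers are assumptions for demonstration, not code from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted parameter average (FedAvg-style aggregation).

    Each client's parameter vector is weighted by its local dataset
    size. Illustrative sketch only: real implementations average every
    tensor of the model, not a single flat vector.
    """
    total = sum(client_sizes)
    arrays = [np.asarray(w, dtype=float) for w in client_weights]
    return sum((n / total) * w for w, n in zip(arrays, client_sizes))

# Two clients, each with a toy 2-parameter model.
w_a = [1.0, 3.0]   # client A, 10 local samples
w_b = [3.0, 5.0]   # client B, 30 local samples
global_w = fedavg([w_a, w_b], client_sizes=[10, 30])
# Client B contributes 3/4 of the weight:
# 0.25 * [1, 3] + 0.75 * [3, 5] = [2.5, 4.5]
print(global_w)  # [2.5 4.5]
```

Because this scheme averages raw parameters, all clients must share the same model architecture, which is precisely the homogeneity constraint that KD-based FL approaches aim to relax.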
Keywords:
Machine Learning: ML: Federated learning
Machine Learning: ML: Ensemble methods