Feature Norm Regularized Federated Learning: Utilizing Data Disparities for Model Performance Gains
Ke Hu, Liyao Xiang, Peng Tang, Weidong Qiu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4136-4146.
https://doi.org/10.24963/ijcai.2024/457
Federated learning (FL) is a machine learning paradigm that aggregates knowledge and utilizes computational power from multiple participants to train a global model. However, a commonplace challenge—non-independent and identically distributed (non-i.i.d.) data across participants—can lead to significant divergence in model updates, thus diminishing training efficacy. In this paper, we propose the Feature Norm Regularized Federated Learning (FNR-FL) algorithm to tackle the non-i.i.d. challenge. FNR-FL incorporates class-average feature norms into the loss function through a straightforward yet effective regularization strategy. The core idea of FNR-FL is to penalize deviations in the update directions of local models caused by non-i.i.d. data. Theoretically, we provide convergence guarantees for FNR-FL when training under non-i.i.d. scenarios. Practically, our comprehensive experimental evaluations demonstrate that FNR-FL significantly outperforms existing FL algorithms in terms of test accuracy, and maintains a competitive convergence rate with lower communication overhead and shorter duration. Compared to FedAvg, FNR-FL exhibits a 66.24% improvement in accuracy and an 11.40% reduction in training time, underscoring its enhanced effectiveness and efficiency. The code is available on GitHub at: https://github.com/LonelyMoonDesert/FNR-FL.
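To make the regularization idea concrete, the following is a minimal PyTorch sketch of the flavor of penalty the abstract describes: the local training loss is augmented with a term comparing the client's class-average feature norms against reference (e.g., globally aggregated) values. The `backbone`/`head` model split, the function names `class_avg_feature_norms` and `fnr_local_loss`, the MSE pairing, and the weight `lam` are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def class_avg_feature_norms(features, labels, num_classes):
    """Per-class mean L2 norm of feature vectors in a mini-batch.

    features: (batch, dim) penultimate-layer activations.
    labels:   (batch,) integer class labels.
    Classes absent from the batch contribute a zero entry.
    """
    norms = features.norm(p=2, dim=1)  # (batch,)
    avg = torch.zeros(num_classes, device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            avg[c] = norms[mask].mean()
    return avg

def fnr_local_loss(model, x, y, global_avg_norms, num_classes, lam=0.1):
    """Hypothetical local objective: task loss plus a penalty pulling the
    client's class-average feature norms toward the global reference values.
    Assumes `model` exposes a feature extractor (`backbone`) and a
    classifier (`head`); `lam` is an illustrative regularization weight."""
    features = model.backbone(x)
    logits = model.head(features)
    task_loss = F.cross_entropy(logits, y)
    local_norms = class_avg_feature_norms(features, y, num_classes)
    reg = F.mse_loss(local_norms, global_avg_norms)
    return task_loss + lam * reg
```

Under this reading, clients whose local data skews the feature statistics of certain classes incur an extra penalty, which discourages local updates from drifting in directions induced by the non-i.i.d. data.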
Keywords:
Machine Learning: ML: Federated learning
Machine Learning: ML: Optimization
Machine Learning: ML: Robustness
Machine Learning: ML: Supervised Learning