From Optimization to Generalization: Fair Federated Learning against Quality Shift via Inter-Client Sharpness Matching
Nannan Wu, Zhuo Kuang, Zengqiang Yan, Li Yu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 5199-5207.
https://doi.org/10.24963/ijcai.2024/575
Due to escalating privacy concerns, federated learning has been recognized as a vital approach for training deep neural networks with decentralized medical data. In practice, it is challenging to ensure consistent imaging quality across various institutions, often attributed to equipment malfunctions affecting a minority of clients. This imbalance in image quality can cause the federated model to develop an inherent bias towards higher-quality images, thus posing a severe fairness issue. In this study, we pioneer the identification and formulation of this new fairness challenge within the context of imaging quality shift. Traditional methods for promoting fairness in federated learning predominantly focus on balancing empirical risks across diverse client distributions. This strategy primarily facilitates fair optimization across different training data distributions, yet neglects the crucial aspect of generalization. To address this, we introduce a solution termed Federated learning with Inter-client Sharpness Matching (FedISM). FedISM enhances both local training and global aggregation by incorporating sharpness-awareness, aiming to harmonize the sharpness levels across clients for fair generalization. Our empirical evaluations, conducted on the widely used ICH and ISIC 2019 datasets, establish FedISM's superiority over current state-of-the-art federated learning methods in promoting fairness. Code is available at https://github.com/wnn2000/FFL4MIA.
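The abstract only outlines the idea of sharpness-aware local training and sharpness-matched aggregation. As a rough illustration of what such a scheme could look like, the sketch below pairs a standard SAM-style local update (Foret et al.) with a hypothetical aggregation rule that upweights clients whose loss landscapes are sharper. The function names, the softmax weighting, and the `rho`/`temperature` parameters are assumptions for illustration, not the authors' actual FedISM implementation; the released code at the repository above is the authoritative reference.

```python
import torch

def sam_local_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One SAM-style local update: ascend to a nearby worst-case point,
    then descend using the gradient computed there.
    Returns a sharpness proxy (loss increase under the perturbation)."""
    # First pass: gradient at the current weights
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    # Perturb weights along the (normalized) ascent direction
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()
    # Second pass: gradient at the perturbed weights
    perturbed_loss = loss_fn(model(x), y)
    perturbed_loss.backward()
    # Restore the original weights, then step with the perturbed gradient
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
    return (perturbed_loss - loss).item()

def sharpness_matched_weights(sharpness_per_client, temperature=1.0):
    """Hypothetical aggregation rule: give larger FedAvg coefficients to
    clients reporting sharper minima, so the global update flattens them
    preferentially and sharpness levels are pushed towards each other."""
    s = torch.tensor(sharpness_per_client, dtype=torch.float32)
    return torch.softmax(s / temperature, dim=0)
```

In this reading, each client reports its sharpness proxy alongside its model update, and the server uses the softmax coefficients in place of uniform or data-size-proportional FedAvg weights; the exact matching objective used by FedISM may differ.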
Keywords:
Machine Learning: ML: Federated learning
AI Ethics, Trust, Fairness: ETF: Fairness and diversity
Computer Vision: CV: Bias, fairness and privacy
Computer Vision: CV: Biomedical image analysis