FedET: A Communication-Efficient Federated Class-Incremental Learning Framework Based on Enhanced Transformer
Chenghao Liu, Xiaoyang Qu, Jianzong Wang, Jing Xiao
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 3984-3992.
https://doi.org/10.24963/ijcai.2023/443
Federated Learning (FL) has attracted wide attention because it enables decentralized learning while preserving data privacy. However, most existing methods unrealistically assume that the classes encountered by local clients are fixed over time. Under this assumption, once new classes are learned, the model suffers from significantly more severe catastrophic forgetting of old classes. Moreover, limited communication budgets make it challenging to deploy large-scale models in FL, which in turn degrades prediction accuracy. To address these challenges, we propose a novel framework, Federated Enhanced Transformer (FedET), which simultaneously achieves high accuracy and low communication cost. Specifically, FedET uses a tiny module, the Enhancer, to absorb and communicate new knowledge, and applies pre-trained Transformers combined with different Enhancers to ensure high accuracy on various tasks. To address the local forgetting caused by new classes in new tasks and the global forgetting caused by non-i.i.d. class imbalance across local clients, we propose an Enhancer distillation method that corrects the imbalance between old and new knowledge and mitigates the non-i.i.d. problem. Experimental results demonstrate that FedET's average accuracy on a representative benchmark dataset is 14.1% higher than that of the state-of-the-art method, while FedET saves 90% of the communication cost compared to the previous method.
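To make the Enhancer idea concrete, the sketch below shows one plausible reading of the abstract in PyTorch: a pre-trained Transformer backbone is assumed to stay frozen on each client, a small bottleneck "Enhancer" module plus a classifier head are the only trainable parts, and only their parameters are exchanged with the server. The class names, the bottleneck size, and the `enhancer_state_dict` helper are illustrative assumptions, not the authors' actual implementation or distillation procedure.

```python
import torch
import torch.nn as nn


class Enhancer(nn.Module):
    """Tiny residual bottleneck module added on top of frozen Transformer features."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual form: keeps the pre-trained representation intact and only
        # adds a small learned correction that absorbs knowledge of new classes.
        return x + self.up(self.act(self.down(x)))


class ClientModel(nn.Module):
    """Frozen pre-trained backbone + trainable Enhancer + classifier head (assumed setup)."""

    def __init__(self, backbone: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # assumption: the backbone is never updated or transmitted
        self.enhancer = Enhancer(hidden_dim)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)
        return self.head(self.enhancer(feats))

    def enhancer_state_dict(self):
        # Only these few parameters would leave the client, which is where the
        # communication savings over sending the full Transformer come from.
        return {"enhancer": self.enhancer.state_dict(),
                "head": self.head.state_dict()}
```

Under this reading, each communication round would aggregate only the Enhancer and head weights across clients, while the large pre-trained Transformer never traverses the network, which is consistent with the reported 90% reduction in communication cost.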
Keywords:
Machine Learning: ML: Federated learning
Machine Learning: ML: Incremental learning