Revisiting Neural Networks for Continual Learning: An Architectural Perspective

Aojun Lu, Tao Feng, Hangjie Yuan, Xiaotian Song, Yanan Sun

Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 4651-4659. https://doi.org/10.24963/ijcai.2024/514

Efforts to overcome catastrophic forgetting have primarily centered on developing more effective Continual Learning (CL) methods. In contrast, less attention has been devoted to analyzing the role of network architecture design (e.g., network depth, width, and components) in contributing to CL. This paper seeks to bridge the gap between network architecture design and CL, and to present a holistic study of the impact of network architectures on CL. This work considers architecture design at the level of network scaling, i.e., width and depth, and also at the level of network components, i.e., skip connections, global pooling layers, and down-sampling. In both cases, we first derive insights by systematically exploring how architectural designs affect CL. Then, grounded in these insights, we craft a specialized search space for CL and further propose a simple yet effective ArchCraft method to steer a network toward a CL-friendly architecture; specifically, the method recrafts AlexNet/ResNet into AlexAC/ResAC. Experimental validation across various CL settings and scenarios demonstrates that the improved architectures are parameter-efficient, achieving state-of-the-art CL performance while being 86%, 61%, and 97% more compact in terms of parameters than the naive CL architecture in Task IL and Class IL. Code is available at https://github.com/byyx666/ArchCraft.
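To make the architectural knobs mentioned above concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' ArchCraft code; class names such as CLSearchSpaceNet are illustrative) of a search space in which depth, width, skip connections, global pooling, and down-sampling are exposed as configurable choices.

```python
# Hypothetical sketch of a CL-oriented architecture search space.
# Depth (blocks per stage), width (base channels), and skip-connection use
# are treated as searchable hyperparameters; names are illustrative only.
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """A ResNet-style block whose skip connection can be toggled on or off."""
    def __init__(self, in_ch, out_ch, use_skip=True):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.use_skip = use_skip
        # 1x1 projection so the residual path matches the channel count.
        self.proj = nn.Conv2d(in_ch, out_ch, 1, bias=False) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.use_skip:
            out = out + self.proj(x)
        return torch.relu(out)


class CLSearchSpaceNet(nn.Module):
    """One candidate architecture drawn from the search space."""
    def __init__(self, num_classes, depth_per_stage=(2, 2, 2),
                 base_width=16, use_skip=True):
        super().__init__()
        stages, in_ch, width = [], 3, base_width
        for n_blocks in depth_per_stage:
            for _ in range(n_blocks):
                stages.append(BasicBlock(in_ch, width, use_skip))
                in_ch = width
            stages.append(nn.MaxPool2d(2))   # down-sampling between stages
            width *= 2
        self.features = nn.Sequential(*stages)
        self.pool = nn.AdaptiveAvgPool2d(1)  # global pooling before the head
        self.head = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.head(x)


# Example candidate: a wider but shallower network, one point in the space.
model = CLSearchSpaceNet(num_classes=100, depth_per_stage=(1, 1, 1), base_width=32)
```

Under these assumptions, a search procedure would evaluate such candidates on CL metrics (e.g., average accuracy and forgetting) and select parameter-efficient configurations; the specific search strategy used by ArchCraft is described in the paper itself.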
Keywords:
Machine Learning: ML: Incremental learning
Computer Vision: CV: Machine learning for vision