Automatic Mixed-Precision Quantization Search of BERT
Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 3427-3433.
https://doi.org/10.24963/ijcai.2021/472
Pre-trained language models such as BERT have shown remarkable effectiveness in various natural language processing tasks. However, these models usually contain millions of parameters, which prevents their practical deployment on resource-constrained devices. Knowledge distillation, weight pruning, and quantization are the main directions in model compression. However, compact models obtained through knowledge distillation may suffer from a significant accuracy drop even at a relatively small compression ratio. On the other hand, there are only a few quantization approaches designed for natural language processing tasks, and they usually require manual setting of hyper-parameters. In this paper, we propose an automatic mixed-precision quantization framework for BERT that conducts quantization and pruning simultaneously. Specifically, our method leverages Differentiable Neural Architecture Search to automatically assign the scale and precision for the parameters in each sub-group, while at the same time pruning out redundant groups of parameters. Extensive evaluations on BERT downstream tasks show that our method outperforms baselines, matching their performance with a much smaller model size.
We also show the possibility of obtaining an extremely lightweight model by combining our solution with orthogonal methods such as DistilBERT.
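To illustrate the general idea of differentiable mixed-precision search described above, the following is a minimal PyTorch-style sketch, not the authors' implementation: each parameter sub-group carries learnable architecture logits over candidate bit-widths (with a 0-bit option standing in for pruning the group), and the forward pass uses a softmax-weighted mixture of the quantized variants so the precision choice can be optimized by gradient descent. All names and the candidate bit-width set are illustrative assumptions.

    # Sketch only: DNAS-style precision selection for one weight group.
    import torch
    import torch.nn as nn

    def quantize(w, bits):
        """Uniform symmetric quantization with a straight-through estimator."""
        if bits == 0:                      # 0 bits = prune the whole group
            return torch.zeros_like(w)
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
        return w + (w_q - w).detach()      # gradients pass through to w

    class MixedPrecisionGroup(nn.Module):
        """One parameter sub-group whose precision is chosen differentiably."""
        def __init__(self, weight, candidate_bits=(0, 2, 4, 8)):
            super().__init__()
            self.weight = nn.Parameter(weight.clone())
            self.candidate_bits = candidate_bits
            # Architecture logits, one per candidate precision, trained jointly.
            self.alpha = nn.Parameter(torch.zeros(len(candidate_bits)))

        def forward(self):
            probs = torch.softmax(self.alpha, dim=0)
            # Soft mixture of quantized weights during search; after search,
            # each group keeps only its argmax precision (or is pruned at 0 bits).
            return sum(p * quantize(self.weight, b)
                       for p, b in zip(probs, self.candidate_bits))

In such a setup, a model-size or bit-budget penalty on the expected precision would typically be added to the task loss so that the search trades accuracy against compression, but the exact objective used in the paper is not specified in this abstract.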
Keywords:
Machine Learning: Deep Learning
Natural Language Processing: NLP Applications and Tools
Natural Language Processing: Text Classification