Where to Prune: Using LSTM to Guide End-to-end Pruning
Jing Zhong, Guiguang Ding, Yuchen Guo, Jungong Han, Bin Wang
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 3205-3211.
https://doi.org/10.24963/ijcai.2018/445
Recent years have witnessed the great success of convolutional neural networks (CNNs) in many fields. However, their huge model size and computational complexity make it difficult to deploy CNNs in some scenarios, such as embedded systems with low computational power. To address this issue, many works prune filters in CNNs to reduce computation. However, they mainly focus on identifying which filters are unimportant within a layer and then prune filters layer by layer or globally. In this paper, we argue that the pruning order is also very significant for model pruning. We propose a novel approach to determine which layer should be pruned at each step. First, we utilize a long short-term memory (LSTM) network to learn the hierarchical characteristics of a network and generate a pruning decision for each layer, which is the main difference from previous works. Next, a channel-based method is adopted to evaluate the importance of filters in the to-be-pruned layer, followed by an accelerated recovery step. Experimental results demonstrate that our approach reduces FLOPs by 70.1% for VGG and 47.5% for ResNet-56 with comparable accuracy. The learning results also appear to reveal the sensitivity of each network layer.
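The central idea of the abstract, an LSTM that reads per-layer features and emits a per-layer pruning decision, can be illustrated with a minimal sketch. This is not the authors' implementation; the layer feature encoding (depth, filter count, FLOPs share), the hidden size, and the decision threshold are all hypothetical choices for illustration, and the weights here are random rather than trained by reinforcement learning as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step over a single layer's feature vector."""
    z = W @ x + U @ h + b                      # stacked pre-activations
    i, f, o, g = np.split(z, 4)                # input, forget, output, candidate
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c + i * g                          # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c

# Hypothetical per-layer features: (normalized depth, filter count, FLOPs share).
layer_feats = [np.array([d / 4.0, n / 512.0, fl]) for d, n, fl in
               [(0, 64, 0.10), (1, 128, 0.20), (2, 256, 0.30), (3, 512, 0.40)]]

H = 8                                          # hidden size (illustrative)
W = rng.standard_normal((4 * H, 3)) * 0.1      # input weights for all 4 gates
U = rng.standard_normal((4 * H, H)) * 0.1      # recurrent weights
b = np.zeros(4 * H)
w_out = rng.standard_normal(H) * 0.1           # decision head

h, c = np.zeros(H), np.zeros(H)
decisions = []
for x in layer_feats:                          # scan layers in order
    h, c = lstm_step(x, h, c, W, U, b)
    p = 1.0 / (1.0 + np.exp(-(w_out @ h)))     # probability of pruning this layer
    decisions.append(bool(p > 0.5))

print(decisions)                               # one prune/keep decision per layer
```

In the paper the decision sequence would be sampled and refined with a reinforcement-learning signal (accuracy after pruning and recovery); the sketch only shows the forward pass that turns layer features into an ordered pruning decision per layer.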
Keywords:
Machine Learning: Reinforcement Learning
Machine Learning: Deep Learning
Computer Vision: Computer Vision