Neural Networks, 2022, Vol. 152. DOI: 10.1016/j.neunet.2022.05.002

LAP: Latency-aware automated pruning with dynamic-based filter selection

Yang, Wangdong¹; Li, Kenli¹; Li, Keqin²; Liu, Chubo¹; Chen, Zailong¹

Author Information

  • 1. College of Information Science and Engineering, Hunan University
  • 2. Department of Computer Science, SUNY College at New Paltz

Abstract

Model pruning is widely used to compress and accelerate convolutional neural networks (CNNs). Conventional pruning techniques focus only on removing as many parameters as possible while preserving model accuracy. This work optimizes not only model accuracy but also model latency during pruning. With multiple optimization objectives, the difficulty of algorithm design increases exponentially, so this paper proposes latency sensitivity to effectively guide the determination of layer sparsity. We present the latency-aware automated pruning (LAP) framework, which leverages reinforcement learning to automatically determine layer sparsity. Latency sensitivity is used as prior knowledge and incorporated into the exploration loop. Rather than relying on a single reward signal such as validation accuracy or floating-point operations (FLOPs), our agent receives feedback on both the accuracy error and the latency sensitivity. We also provide a novel filter selection algorithm that accurately distinguishes important filters within a layer based on their dynamic changes. Compared with state-of-the-art compression policies, our framework demonstrates superior performance for VGGNet, ResNet, and MobileNet on CIFAR-10, ImageNet, and Food-101. LAP allows the inference latency of MobileNet-V1 to achieve an approximately 1.64x speedup on a Titan RTX GPU with no loss of ImageNet Top-1 accuracy, significantly improving the Pareto-optimal curve of the accuracy-latency trade-off. (C) 2022 Elsevier Ltd. All rights reserved.
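The abstract describes a reward that combines accuracy error with latency sensitivity instead of a single accuracy or FLOPs signal. A minimal sketch of that idea is below; the function name, the linear combination, and all values are illustrative assumptions, not the paper's actual formula.

```python
# Hypothetical sketch: reward the agent for a good accuracy-latency
# trade-off. Sparsity applied to latency-sensitive layers contributes
# more to the latency term. The exact functional form is an assumption.

def latency_aware_reward(acc_error, layer_sparsities, latency_sensitivities):
    """Higher reward = lower accuracy error and more sparsity in
    latency-sensitive layers (a proxy for real latency reduction)."""
    assert len(layer_sparsities) == len(latency_sensitivities)
    latency_term = sum(s * w for s, w in
                       zip(layer_sparsities, latency_sensitivities))
    return -acc_error + latency_term

# At equal accuracy error, pruning the latency-sensitive layer (weight 0.9)
# harder yields a higher reward than pruning the insensitive one (weight 0.1).
r_good = latency_aware_reward(0.01, [0.6, 0.2], [0.9, 0.1])
r_bad = latency_aware_reward(0.01, [0.2, 0.6], [0.9, 0.1])
```

Such a per-layer weighting is what lets the agent prefer sparsity where it actually translates into measured speedup, rather than treating all pruned parameters as equally valuable.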

Key words

AutoML; Channel pruning; Model compression and acceleration; Reinforcement learning


Publication Year

2022

Journal

Neural Networks
Indexed in EI and SCI
ISSN: 0893-6080
Cited by: 4
References: 50