SCWC: Structured channel weight sharing to compress convolutional neural networks

Convolutional neural networks (CNNs) have surpassed humans in many computer vision areas. However, the redundancy of CNNs inhibits their deployment on embedded devices. In this paper, a simple shared channel weight convolution (SCWC) approach is proposed to reduce the number of parameters. Multiplication is much more costly than accumulation, so reducing multiplications is more significant than reducing accumulations in CNNs. The proposed SCWC reduces the number of multiplications via the distributive property, enabled by structured channel parameter sharing. Furthermore, a fully trained CNN can be compressed directly, without tedious retraining from scratch. To evaluate the performance of the proposed SCWC, five competitive benchmark datasets for image classification and object detection are adopted: CIFAR-10, CIFAR-100, ImageNet, CUB-200-2011, and PASCAL VOC. Experiments demonstrate that the proposed SCWC reduces about 50% of parameters and multiplications for ResNet50 with only 0.13% accuracy loss, and about 70% of parameters and multiplications for VGG16 with 1.64% accuracy loss on ImageNet, which is better than many other pruning methods for CNNs. The accuracy loss is very small because of the soft parameter sharing. Moreover, the proposed SCWC is also effective for object detection and fine-grained image classification tasks. (c) 2021 Elsevier Inc. All rights reserved.
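The multiplication saving described in the abstract rests on the distributive property: if several input channels of a convolution share the same weight, their feature maps can be summed first and multiplied once. A minimal NumPy sketch of this idea for a 1x1 convolution follows; the variable names and the two-group sharing scheme are illustrative assumptions, not the paper's actual SCWC algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))  # 4 input channels, 8x8 feature maps

# Naive 1x1 convolution for one output channel: one multiply per input channel.
# Here channels 0,1 and channels 2,3 happen to share the same weight.
w_naive = np.array([0.5, 0.5, -1.2, -1.2])
y_naive = sum(w_naive[c] * x[c] for c in range(4))  # 4 multiplications per pixel

# Shared-weight version: sum the channels in each sharing group first,
# then multiply once per group (distributive property): w*(x0 + x1).
groups = {0.5: [0, 1], -1.2: [2, 3]}
y_shared = sum(w * x[chs].sum(axis=0) for w, chs in groups.items())  # 2 multiplications

# Both orderings produce the same output feature map.
assert np.allclose(y_naive, y_shared)
```

The accumulation count is unchanged, but the per-pixel multiplication count drops from the number of input channels to the number of distinct shared weights.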

Keywords: Convolutional neural network; Image classification; Network compression; Kernel sharing

Li, Guoqing; Zhang, Meng; Wang, Jiuyang; Weng, Dongpeng; Corporaal, Henk


Southeast Univ

Eindhoven Univ Technol

2022

Information Sciences

Indexed in: EI, SCI
ISSN: 0020-0255
Year, Volume: 2022, Vol. 587