Convolutional Neural Network Pruning Based on Channel Similarity Entropy
Convolutional Neural Networks (CNNs) contain a large number of filters, which occupy significant memory resources during training and storage. Pruning filters is an effective way to reduce network size, free up memory, and increase computing speed. A primary issue with existing filter pruning methods is that they evaluate filter weights in isolation, preserving the filters with the largest weights while often overlooking the contribution of smaller weights to feature extraction. To address this, a Filter Entropy Calculation (FEC) method based on channel similarity is proposed. The method compresses the weight tensor by its mean value, a step whose soundness is supported by the structural characteristics of the filter. The similarity between channels is assessed by computing the distance between filter channels, and filter entropy is determined from this similarity. Filters are then ranked by their entropy, and a given proportion of the filters with the lowest entropy is removed. The experimental design applies different pruning ratios to different convolutional layers. VGG-16 and ResNet-34 are pruned on the CIFAR-10 and ImageNet standard datasets. Experimental results show that, while the original accuracy is largely preserved, the number of parameters is reduced by approximately 94% and 70%, respectively. In addition, on the Single Shot MultiBox Detector (SSD) framework, the number of parameters decreases by 55.72% while the mean Average Precision (mAP) improves by 1.04 percentage points.
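The abstract only summarizes the FEC pipeline (mean compression, channel-distance similarity, entropy ranking, pruning). As a rough illustration, the NumPy sketch below implements one plausible reading of these steps; the function names, the choice of Euclidean distance, and the row-normalization used to form a probability distribution are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def filter_entropy_scores(weight, eps=1e-12):
    """Hypothetical sketch of the FEC idea: score each filter by the
    entropy of its distance-based similarity to the other filters.

    weight: conv weight of shape (out_ch, in_ch, k, k).
    Returns one entropy score per filter (lower = more redundant).
    """
    out_ch = weight.shape[0]
    # Step 1: compress each k x k kernel slice to its mean, so every
    # filter becomes a length-in_ch vector.
    compressed = weight.mean(axis=(2, 3))          # (out_ch, in_ch)

    # Step 2: pairwise Euclidean distances between compressed filters
    # (distance choice is an assumption).
    diff = compressed[:, None, :] - compressed[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))       # (out_ch, out_ch)

    # Step 3: turn each filter's distance row into a probability
    # distribution (dropping the zero self-distance) and take its
    # Shannon entropy.
    mask = ~np.eye(out_ch, dtype=bool)
    rows = dist[mask].reshape(out_ch, out_ch - 1)
    probs = rows / (rows.sum(axis=1, keepdims=True) + eps)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def prune_indices(weight, ratio):
    """Indices of the lowest-entropy filters to remove at a given ratio."""
    scores = filter_entropy_scores(weight)
    n_prune = int(round(ratio * len(scores)))
    return np.argsort(scores)[:n_prune]

# Toy usage: a 64-filter conv layer, pruned at a 50% ratio.
w = np.random.randn(64, 32, 3, 3)
drop = prune_indices(w, ratio=0.5)   # 32 lowest-entropy filter indices
```

In this reading, a filter whose compressed vector sits close to many others yields a peaked, low-entropy distance distribution and is treated as redundant, which matches the abstract's rule of removing the lowest-entropy filters; per-layer `ratio` values would then differ across convolutional layers as the experiments describe.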