Remote sensing scene classification model based on an improved ShuffleNetV2 network
Traditional remote sensing scene classification models suffer from large parameter counts that demand substantial computational resources, and from uneven feature recognition that lowers classification accuracy. To address these challenges, this study proposes a remote sensing image classification method based on an improved ShuffleNetV2 network and knowledge distillation. Because subtle features in remote sensing scenes captured at long distances and high altitudes are difficult to extract uniformly, we introduce the CBAM channel-spatial attention mechanism. We also redesign the basic stacking unit of ShuffleNetV2 to make it more lightweight. Finally, using transfer learning and knowledge distillation, we load a pre-trained ResNet101 as the teacher network and the improved ShuffleNetV2 as the student network to raise remote sensing image classification accuracy. Experimental results show that the improved ShuffleNetV2 reduces the parameter count by 28% while increasing accuracy from 91.8% to 94.8%. Compared with lightweight models such as MobileNetV3 and MobileViT, our approach achieves accuracy improvements of 4.2% and 4.5%, respectively. Importantly, the improved model maintains high classification accuracy while occupying less storage space.
Keywords: deep learning; image classification; attention mechanism; lightweight neural network; knowledge distillation
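For illustration only, the sketch below shows one plausible way to wire up the teacher-student distillation setup summarized above in PyTorch: a pre-trained ResNet101 serves as the teacher and a ShuffleNetV2 backbone as the student, trained with the standard soft-target distillation loss. The temperature T, loss weight alpha, class count, the training-step helper, and the omission of the CBAM insertion and the lightweight stacking-unit changes are all assumptions made for brevity; this is not the paper's exact implementation.

```python
# Minimal knowledge-distillation sketch (assumed hyperparameters, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KD loss: KL divergence on temperature-softened logits plus
    cross-entropy on ground-truth labels. T and alpha are illustrative values."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

num_classes = 45  # assumed scene-class count; depends on the dataset used

# Teacher: pre-trained ResNet101 with its classifier resized, kept frozen here.
# In practice it would first be fine-tuned on the remote sensing dataset.
teacher = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
teacher.fc = nn.Linear(teacher.fc.in_features, num_classes)
teacher.eval()

# Student: stock ShuffleNetV2 backbone (the CBAM modules and lightweight
# unit modifications described in the abstract are omitted in this sketch).
student = models.shufflenet_v2_x1_0(
    weights=models.ShuffleNet_V2_X1_0_Weights.IMAGENET1K_V1
)
student.fc = nn.Linear(student.fc.in_features, num_classes)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def train_step(images, labels):
    """One distillation step: teacher provides soft targets, student is updated."""
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The frozen teacher supplies softened class probabilities that carry inter-class similarity information, which is what allows the compact student to recover accuracy the lightweight architecture would otherwise lose.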