
ADC-CPANet: A remote sensing image classification method based on local-global feature fusion

Remote sensing images contain rich texture information and complex overall structure, so multi-scale feature extraction is essential for scene classification. To this end, this paper designs a local feature extraction module, the Aggregation Depthwise Convolution (ADC) block, and a global-local feature extraction module, the Convolution Parallel Attention (CPA) block. An asymmetric depthwise convolution group is proposed within the ADC block to strengthen the model's robustness to image flipping and rotation, and a multi-group convolution head decomposition attention that enlarges the receptive field and enhances feature extraction is proposed within the CPA block. On the basis of these two modules, a new remote sensing image scene classification model, ADC-CPANet, is constructed; ADC and CPA blocks are stacked at each stage so that the model extracts both global and local features more effectively. To verify the effectiveness of ADC-CPANet, the open-source RSSCN7 and SIRI-WHU datasets are used to compare its complexity and recognition ability with those of other deep learning networks. Experimental results show that ADC-CPANet achieves classification accuracies of 96.43% and 96.04%, respectively, outperforming other advanced models.
The rapid development of remote sensing technologies, such as satellites and unmanned aerial vehicles, has led to a surge in the amount and variety of high-resolution remote sensing images, marking the onset of the "era of remote sensing big data." Compared with low-resolution images, high-resolution remote sensing images provide richer texture, more detailed information, and a more complex structure, making them crucial for applications such as urban planning. However, images within the same category can vary substantially, whereas images from different categories may appear similar. Multi-scale feature extraction is therefore important for remote sensing image scene classification.

Current methods for remote sensing scene classification fall into two categories according to their feature representation: those based on handcrafted features and those based on deep learning. Handcrafted approaches, such as the scale-invariant feature transform and gradient histograms, can achieve good results on simple classification tasks, but the features they extract may be incomplete or redundant, so their accuracy in complex scenes remains low. By contrast, deep learning methods have made remarkable progress in scene classification owing to their powerful feature extraction ability. Convolutional Neural Networks (CNNs) are commonly used in visual tasks, particularly variants with more complex connections and diverse convolution forms. CNNs are effective at extracting local features, but they struggle to capture long-distance dependencies among features. The Transformer architecture, which has recently been applied to computer vision, addresses this limitation through its self-attention layers, which enable global feature extraction. Recent studies show that hybrid architectures combining CNNs and Transformers can exploit the advantages of both.

This study proposes an Aggregation Depthwise Convolution (ADC) module and a Convolution Parallel Attention (CPA) module. The ADC module effectively extracts local feature information and enhances the robustness of the model to image flipping and rotation. The CPA module integrates global and local feature extraction, with a multi-group convolution head decomposition attention designed to expand the receptive field and enhance feature extraction capacity. A remote sensing image scene classification model called ADC-CPANet is built from these two modules: ADC and CPA blocks are stacked at each stage, improving the model's ability to extract both global and local features. The effectiveness of ADC-CPANet is validated on the RSSCN7 and SIRI-WHU (Google Image) datasets. Experimental results demonstrate that ADC-CPANet achieves classification accuracies of 96.43% on RSSCN7 and 96.04% on SIRI-WHU, outperforming other advanced models. ADC-CPANet excels at extracting global and local features, achieving competitive scene classification accuracy.
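The abstract describes the ADC block's "asymmetric depthwise convolution group" only at a high level. As an illustration only, the sketch below pairs a 1×k with a k×1 depthwise kernel per channel and sums the two responses; the kernel shapes, the summation rule, and the function names (`depthwise_conv2d`, `asymmetric_dw_pair`) are assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise 2-D convolution, stride 1, zero-padded to 'same' size.

    x       : (C, H, W) input feature map
    kernels : (C, kh, kw) one kernel per channel (no cross-channel mixing)
    """
    C, H, W = x.shape
    _, kh, kw = kernels.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    out = np.zeros((C, H, W), dtype=float)
    for c in range(C):            # each channel uses only its own kernel
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + kh, j:j + kw] * kernels[c])
    return out

def asymmetric_dw_pair(x, k=3, rng=None):
    """Hypothetical asymmetric depthwise pair: a 1xk (horizontal) and a kx1
    (vertical) depthwise convolution applied to the same input and summed,
    as one plausible reading of an 'asymmetric depthwise convolution group'."""
    rng = rng or np.random.default_rng(0)
    C = x.shape[0]
    k_row = rng.standard_normal((C, 1, k))  # 1xk kernels
    k_col = rng.standard_normal((C, k, 1))  # kx1 kernels
    return depthwise_conv2d(x, k_row) + depthwise_conv2d(x, k_col)
```

Because each channel is filtered independently, a depthwise layer keeps the parameter count at C·kh·kw rather than C²·kh·kw, which is why such blocks are popular in lightweight hybrid backbones.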

remote sensing image; scene classification; convolutional neural network; Transformer; Multi-Gconv Head Decomposition Attention; ADC-CPANet model

王威、李希杰、王新


School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China


2024

National Remote Sensing Bulletin (遥感学报)
Sponsored by the Environmental Remote Sensing Branch of the Geographical Society of China and the Institute of Remote Sensing Applications, Chinese Academy of Sciences


Indexed in: CSTPCD; Peking University Core Journals list (北大核心)
Impact factor: 2.921
ISSN: 1007-4619
Year, Volume (Issue): 2024, 28(10)