Journal of Lanzhou University of Technology, 2024, Vol. 50, Issue (2): 87-95.


Hyperspectral image classification based on improved SE-Net and depth-separable residuals

Wang Yan, Wang Zhenyu

Author information

  • 1. School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, Gansu, China


Abstract

In response to the challenges posed by the convolutional neural networks (CNNs) commonly used for hyperspectral image classification, namely their high parameter count, long training times, and strong dependence on sample quantity, a classification network, MDSR&SE-Net, based on an improved squeeze-and-excitation network and depth-separable residuals is proposed for the limited-training-sample setting. First, principal component analysis is employed to reduce the channel dimension of the original hyperspectral image (HSI). Then, a multi-feature residual structure is connected through a 3D convolutional neural network, and the spatial and spectral details of the hyperspectral image are extracted by an embedded improved squeeze-and-excitation block. Finally, the extracted features are fed into a Softmax classifier for classification. To further lighten the network, the number of parameters is reduced by using depth-separable convolutions in the residual structure and introducing global average pooling. Experimental results show that, with limited training samples, the overall classification accuracy on three common hyperspectral data sets exceeds 99%.
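As a rough illustration of the parameter savings the abstract attributes to depth-separable convolution, the sketch below compares the learnable-parameter count of a standard convolution with its depthwise-separable factorization (depthwise filtering followed by a 1x1 pointwise mix). The layer sizes are illustrative assumptions, not values taken from the paper:

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # A standard convolution learns one k*k*c_in kernel per output channel.
    return c_out * c_in * k * k

def separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise step: one k*k kernel per input channel.
    # Pointwise step: a 1x1 convolution mixing c_in channels into c_out.
    return c_in * k * k + c_in * c_out

# Illustrative layer: 64 input channels, 128 output channels, 3x3 kernel.
c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)   # 128 * 64 * 9 = 73728
sep = separable_conv_params(c_in, c_out, k)  # 64 * 9 + 64 * 128 = 8768
print(std, sep, round(std / sep, 1))         # roughly an 8x reduction here
```

The same factorization applies per layer throughout the residual structure, which is why replacing standard convolutions with depth-separable ones shrinks the network substantially without changing the receptive field.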


Key words

hyperspectral image / depth-separable convolution / residual network / SE-Net


Funding

National Natural Science Foundation of China (61863025)

Key Research and Development Program of Gansu Province (Industrial) (18YF1GA060)

Publication year: 2024
Journal of Lanzhou University of Technology, published by Lanzhou University of Technology
Indexed in CSTPCD and the Peking University Core Journals list
Impact factor: 0.57
ISSN: 1673-5196
Number of references: 3