To exploit the important role of remote sensing imagery in national defense, public security, environmental monitoring, and related fields, fusing the complementary information of a registered high-resolution panchromatic image and a low-resolution multispectral image has become a focus of current research. Although pan-sharpening methods have made considerable progress in recent years, most of them are still subject to two limitations: first, degrading images to different sizes under the Wald protocol causes information loss; second, constrained by the network structure and a single attention mechanism, they cannot exploit global and local features simultaneously. To address these problems, this paper proposes a multi-attention progressive network for pan-sharpening, termed MAPNet. In this network, multi-stage training is first adopted to reduce the spectral and detail loss caused by size changes. Second, a joint attention module is designed that combines self-attention, spatial attention, and channel attention to perform multi-modal analysis of global and local features as well as spatial and channel features, further improving MAPNet's ability to preserve texture details. Extensive comparison and ablation experiments on GaoFen-2 satellite data show, both qualitatively and quantitatively, that the proposed method outperforms ten other methods in fusion quality and alleviates spectral distortion and loss of texture detail.
Pan-sharpening based on multi-attention progressive network
Due to various physical and technical limitations, the radiation energy received by different sensors and the amount of data collected vary, and a single sensor cannot simultaneously acquire images with both high spatial and high spectral resolution. Therefore, it is necessary to develop an application-oriented technique for generating multispectral images with high spatial resolution. Pan-sharpening fuses a low-spatial-resolution multispectral image with a high-spatial-resolution panchromatic image to obtain a high-resolution multispectral image rich in spatial and spectral information. Although pan-sharpening methods have made significant progress in recent years, most still face two limitations: first, constrained by the network structure and a single attention mechanism, global and local features cannot be used simultaneously, resulting in loss of spatial information; second, using the Wald protocol to obtain high-resolution multispectral images leads to loss of spectral and detail information. To address these problems, this paper proposes MAPNet, a pan-sharpening framework based on a multi-attention progressive network. To extract the most important information, we fully exploit the feature information contained in the panchromatic and multispectral images and reduce the interference of redundant information. The low-resolution and full-resolution phases are closely linked in a progressive pattern: MAPNet learns to extract global, spectral, and gradient information to reduce the spectral and detail loss caused by size changes. The multi-attention module combines self-attention, spatial attention, and channel attention to achieve multi-modal analysis of global, local, spatial, and channel features, further improving MAPNet's ability to retain texture details. The proposed algorithm is compared on the GF-2 dataset with the traditional methods BT-H, C-MTF-GLP-CBD, GS, BDSD, and PRACS and with the deep learning methods MUCNN, MDCUN, Band-Aware, PNN, and TFNet. In addition, the performance of models with different numbers of stages and different structures is reported. Objective measurements include RMSE, RASE, SAM, ERGAS, QAVE, SSIM, FSIM, QNR, Ds, and Dλ. Combining subjective visual assessment with objective evaluation, the results indicate that MAPNet's fused images retain more spectral and detail information.
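The abstract describes the joint attention module only at a high level, so the following is a minimal PyTorch-style sketch of how channel attention, spatial attention, and pixel-level self-attention could be chained in one block. The class name MultiAttentionBlock, the squeeze-and-excitation channel gate, the CBAM-style spatial gate, and the single-head self-attention are illustrative assumptions, not the paper's actual layer configuration.

```python
import torch
import torch.nn as nn

class MultiAttentionBlock(nn.Module):
    """Hypothetical sketch: chain channel, spatial, and self-attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Self-attention over flattened spatial positions (global context).
        self.self_attn = nn.MultiheadAttention(channels, num_heads=1,
                                               batch_first=True)

    def forward(self, x):
        # Local channel weighting.
        x = x * self.channel_gate(x)
        # Local spatial weighting from mean/max channel descriptors.
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        x = x * self.spatial_gate(stats)
        # Global self-attention: treat each pixel as a token.
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        attn_out, _ = self.self_attn(tokens, tokens, tokens)
        return x + attn_out.transpose(1, 2).view(b, c, h, w)
```

The block is shape-preserving, e.g. MultiAttentionBlock(32)(torch.randn(1, 32, 64, 64)) returns a tensor of the same size, so it could in principle be dropped between convolutional stages of a progressive network.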
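For reference, two of the listed spectral-fidelity metrics, SAM and ERGAS, have standard definitions in pan-sharpening evaluation; the exact implementation choices (e.g. degrees vs. radians) are not specified in the abstract.

```latex
% SAM: mean spectral angle over P pixels between the reference spectrum v_i
% and the fused spectrum \hat{v}_i
\[
\mathrm{SAM} = \frac{1}{P}\sum_{i=1}^{P}
  \arccos\!\left(\frac{\langle \mathbf{v}_i, \hat{\mathbf{v}}_i\rangle}
                      {\|\mathbf{v}_i\|_2\,\|\hat{\mathbf{v}}_i\|_2}\right)
\]

% ERGAS: h/l is the PAN-to-MS pixel-size ratio (1/4 for GF-2),
% N the number of bands, \mu_k the mean of reference band k
\[
\mathrm{ERGAS} = 100\,\frac{h}{l}
  \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\frac{\mathrm{RMSE}_k}{\mu_k}\right)^{2}}
\]
```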