Super-Resolution Reconstruction of Remote Sensing Images Based on Improved Generative Adversarial Networks
To achieve super-resolution of remote sensing images and address the problem that the degradation models used by current super-resolution algorithms do not match the degradation characteristics of remote sensing images, a super-resolution method for remote sensing images based on generative adversarial networks is proposed. To account for the complexity of remote sensing imaging, a new remote sensing image degradation model is established, which alleviates the problem that network performance cannot be effectively improved when the image prior and the learned mapping are inconsistent. Meanwhile, a hybrid attention mechanism is introduced into the residual network to enhance the generator's ability to recover texture details in remote sensing images, and a U-Net structure discriminator is adopted so that the discriminator meets the stronger discrimination requirements of the adversarial network. Finally, the improved network is validated on the UCAS-AOD dataset and compared with four mainstream methods; its peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) exceed those of the second-best method by 2.2923 dB and 11.88%, respectively, demonstrating the superiority of the proposed algorithm.
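The abstract refers to a degradation model tailored to remote sensing imaging but does not give its form. As a rough orientation only, the sketch below implements a generic blur–downsample–noise pipeline of the classical form I_LR = (I_HR ⊗ k)↓s + n; the Gaussian blur kernel, integer stride downsampling, and additive Gaussian noise, along with the parameters `scale`, `blur_sigma`, and `noise_sigma`, are illustrative assumptions and not the degradation model proposed in the paper.

```python
# Minimal sketch of a blur -> downsample -> noise degradation pipeline.
# All kernel and noise choices here are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter


def degrade(hr: np.ndarray, scale: int = 4,
            blur_sigma: float = 1.2, noise_sigma: float = 5.0) -> np.ndarray:
    """Simulate a low-resolution observation from a high-resolution image.

    hr          : H x W x C float array with values in [0, 255]
    scale       : integer downsampling factor (assumed)
    blur_sigma  : std-dev of the isotropic Gaussian blur (assumed)
    noise_sigma : std-dev of additive Gaussian sensor noise (assumed)
    """
    # 1. Blur each channel to model the sensor point spread function.
    blurred = np.stack(
        [gaussian_filter(hr[..., c], sigma=blur_sigma)
         for c in range(hr.shape[-1])],
        axis=-1)
    # 2. Downsample by striding onto the low-resolution grid.
    lr = blurred[::scale, ::scale, :]
    # 3. Add zero-mean Gaussian noise to model sensor/transmission noise.
    lr = lr + np.random.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0.0, 255.0)


# Example: a synthetic 256x256 RGB image degraded to 64x64.
hr = np.random.rand(256, 256, 3) * 255.0
lr = degrade(hr)
print(lr.shape)  # (64, 64, 3)
```

Such a pipeline is only a baseline reference point; the paper's contribution, as stated in the abstract, is a degradation model designed to better match how remote sensing images actually degrade.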