Research on Super Resolution Reconstruction of Remote Sensing Images Based on Multi-scale Convolutional Residuals
Aiming at the problems of existing remote sensing image super-resolution (SR) reconstruction techniques in texture detail recovery, artifact suppression, and model convergence, an improved model based on a generative adversarial network, RS-SRGAN (Residual-SN-SimAM SRGAN), is proposed. First, a dense residual convolution block with multi-scale convolution (Residual-in-Residual Dense Conv Block, RRDCB) is applied in the generator for deep feature extraction to better recover image details, while batch normalization (BN) is removed to improve the generalization ability of the model. Then, an adaptive normalization (SN) layer is introduced into the discriminator to replace the traditional BN layer, enabling the network to adaptively extract image features and accelerating model convergence. Finally, the parameter-free SimAM attention mechanism is integrated into the discriminator to capture key local details in the image, which effectively improves the discriminative ability of the model without introducing additional parameters and further improves the quality of the generated images. Experimental results show that, compared with the original SRGAN, the improved model raises the peak signal-to-noise ratio (PSNR) by 1.0754 dB and 0.3492 dB on the UCMLU and NWPU datasets, respectively, and the structural similarity (SSIM) by 0.0049 and 0.0070, respectively. This study provides a new perspective and technical basis for research on and applications of remote sensing image super-resolution.
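The SimAM attention mentioned above is parameter-free: each activation is reweighted by an energy-based saliency score computed from per-channel spatial statistics, so no learnable weights are added to the discriminator. The sketch below follows the standard published SimAM formulation in plain NumPy; the function name and the regularization constant `lam` are illustrative, and the abstract does not specify the authors' exact hyperparameters or implementation.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a feature map x of shape (C, H, W).

    Each activation is gated by sigmoid of an inverse-energy score derived
    from the squared deviation from the per-channel spatial mean.
    `lam` is a small regularizer (illustrative default, not from the paper).
    """
    c, h, w = x.shape
    n = h * w - 1  # number of "other" neurons per channel
    mu = x.mean(axis=(1, 2), keepdims=True)          # per-channel spatial mean
    d = (x - mu) ** 2                                # squared deviation
    v = d.sum(axis=(1, 2), keepdims=True) / n        # per-channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5              # inverse energy per neuron
    return x * (1.0 / (1.0 + np.exp(-e_inv)))        # sigmoid gating, no parameters
```

Because the gate is a sigmoid in (0, 1), the module only rescales activations and preserves the feature-map shape, which is why it can be dropped into the discriminator without changing its parameter count.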