Research on Single Image Super-resolution Based on Multi-scale Attention Feature Fusion
A high-resolution image has a high pixel density and therefore provides more detail, which is often decisive in practical applications. Image super-resolution based on generative adversarial networks has attracted growing attention in recent years because of its potential to generate rich detail. To address the limited receptive field of existing network models and their neglect of essential texture features, this work optimizes the network based on Real-ESRGAN and multi-scale attention feature fusion: the residual-in-residual dense block is replaced by a module that combines large-kernel decomposition and multi-scale learning with a dual-branch structure consisting of a global-learning branch and a down-sampling branch. The result is a single-image super-resolution reconstruction algorithm based on multi-scale attention fusion that strengthens the interaction between every local and global token pair to form a richer, more informative representation. Super-resolution reconstruction experiments at scale factors of 2, 3 and 4 were carried out on standard datasets, and the reconstructions were evaluated by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and compared with the SRCNN, SRGAN, EDSR, RDN, RCAN, HAN, ENLCA, MAN and Real-ESRGAN methods. The results show that the proposed algorithm outperforms these models and produces better visual quality.
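
The abstract does not give implementation details, but the core idea of replacing the dense residual block with a large-kernel, multi-scale attention module can be sketched roughly. The following PyTorch sketch is an illustration under stated assumptions, not the paper's specification: the module name, kernel sizes, dilations and the two-branch channel split are hypothetical choices. Each branch approximates a large convolution kernel by decomposing it into a small depthwise convolution, a dilated depthwise convolution and a 1x1 pointwise convolution; the fused multi-scale output then gates the input features as an attention map.

import torch
import torch.nn as nn

class MultiScaleLargeKernelAttention(nn.Module):
    # Illustrative sketch (hypothetical): multi-scale attention built from
    # large-kernel decompositions. Kernel sizes/dilations are assumptions.
    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0, "channels are split evenly across two branches"
        c = channels // 2
        # Branch 1: wide effective receptive field via
        # 5x5 depthwise conv + 7x7 depthwise conv (dilation 3) + 1x1 conv.
        self.branch1 = nn.Sequential(
            nn.Conv2d(c, c, 5, padding=2, groups=c),
            nn.Conv2d(c, c, 7, padding=9, dilation=3, groups=c),
            nn.Conv2d(c, c, 1),
        )
        # Branch 2: narrower effective receptive field via
        # 3x3 depthwise conv + 5x5 depthwise conv (dilation 2) + 1x1 conv.
        self.branch2 = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1, groups=c),
            nn.Conv2d(c, c, 5, padding=4, dilation=2, groups=c),
            nn.Conv2d(c, c, 1),
        )
        # Pointwise fusion of the two scales into a single attention map.
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split channels, process each half at a different scale, then fuse.
        x1, x2 = torch.chunk(x, 2, dim=1)
        attn = self.fuse(torch.cat([self.branch1(x1), self.branch2(x2)], dim=1))
        # The fused multi-scale response gates the input features.
        return attn * x

# Usage example: the module preserves spatial size and channel count.
# feats = torch.randn(1, 64, 48, 48)
# out = MultiScaleLargeKernelAttention(64)(feats)   # shape (1, 64, 48, 48)

Decomposing one large kernel into a small depthwise convolution followed by a dilated depthwise convolution keeps the parameter count low while still covering a wide receptive field, which is the usual motivation for this kind of design.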