Image super-resolution network based on multi-scale adaptive attention
Aiming at the problem that most image super-resolution methods cannot fully extract features with single-scale convolution, an image super-resolution network based on multi-scale adaptive attention is proposed. To fully exploit the contextual information in each hierarchical feature, a multi-scale feature fusion block is designed, whose basic unit consists of an adaptive dual-scale block, a multi-path progressive interactive block, and an adaptive dual-dimensional attention connected in series. First, the adaptive dual-scale block autonomously fuses features at two scales to obtain richer contextual features; second, the multi-path progressive interactive block interacts the outputs of the adaptive dual-scale block in a progressive way to strengthen the correlation between contextual features; finally, the adaptive dual-dimensional attention autonomously selects attention along different dimensions to refine the output features, making them more discriminative. Experimental results show that on the Set5, Set14, BSD100, and Urban100 test sets, the proposed method improves the PSNR and SSIM metrics over other mainstream methods. In particular, on the Urban100 test set, where texture details are difficult to recover, the proposed method improves PSNR and SSIM by 0.05 dB and 0.0045, respectively, over the previous best method, SwinIR, at a scaling factor of ×4. In terms of visual quality, the images reconstructed by the proposed method contain more texture details.
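The three stages described above (dual-scale fusion with adaptive weights, followed by an adaptively weighted choice between channel and spatial attention) can be sketched in simplified form. The paper's actual implementation is not given here, so the sketch below is a hypothetical NumPy stand-in: box filters of size 3 and 5 substitute for the two convolution scales, and the gate logits stand in for learned parameters; the multi-path progressive interaction is omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def box_filter(x, k):
    """Depthwise k x k box filter (stand-in for a k x k convolution branch)."""
    c, h, w = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = xp[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return out

def adaptive_dual_scale(x, gate_logits):
    """Fuse a 3x3 branch and a 5x5 branch with softmax-normalized weights.

    gate_logits is a hypothetical learned 2-vector; softmax turns it into
    adaptive fusion weights for the two scales.
    """
    b3 = box_filter(x, 3)
    b5 = box_filter(x, 5)
    w = softmax(gate_logits)          # (2,) adaptive scale weights
    return w[0] * b3 + w[1] * b5

def adaptive_dual_dim_attention(x, dim_logits):
    """Blend channel attention and spatial attention with adaptive weights.

    dim_logits is a hypothetical learned 2-vector selecting between the
    two attention dimensions.
    """
    # Channel attention: sigmoid of the global average per channel.
    ch = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))   # (c,)
    ch_out = x * ch[:, None, None]
    # Spatial attention: sigmoid of the channel-mean map.
    sp = 1.0 / (1.0 + np.exp(-x.mean(axis=0)))        # (h, w)
    sp_out = x * sp[None, :, :]
    a = softmax(dim_logits)           # (2,) adaptive dimension weights
    return a[0] * ch_out + a[1] * sp_out
```

In a trained network the gate and dimension logits would be produced by small learned layers conditioned on the input feature, which is what makes the scale fusion and attention selection "adaptive" rather than fixed.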
super-resolution; multi-scale feature; attention mechanism; adaptive weights; progressive information interaction