Lightweight image super-resolution combining residual learning and layer attention
Convolutional neural networks (CNNs) have shown strong performance on single image super-resolution (SISR). However, most super-resolution studies rely on complex layer-connection strategies to improve feature utilization, which steadily increases network depth and parameter count and makes the models hard to deploy on mobile devices. To address this problem, a lightweight image super-resolution network combining residual learning and layer attention (RLAN) is proposed to extract and aggregate important features more efficiently. First, a 3×3 convolutional layer extracts shallow features. In the nonlinear mapping stage, improved residual local feature blocks (RLFBs) are stacked for local feature learning, and a layer attention module (LAM) is introduced to further improve feature aggregation by exploiting the hierarchical features on the residual branch. Finally, a pixel attention reconstruction block (PARB) performs image reconstruction, improving reconstruction quality at a small parameter cost. Compared with RLFN, the NTIRE 2022 champion, RLAN achieves superior performance with only 373K parameters, improving the average PSNR and SSIM over four benchmark datasets by 0.35 dB and 0.0014, respectively. Comprehensive experiments demonstrate that RLAN accurately restores SR images and effectively reduces artifacts at edges.
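The layer attention described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: it assumes the common design in which hierarchical feature maps are flattened, weighted by a softmax over their pairwise correlations, re-aggregated, and added back through a residual connection. The function name `layer_attention` and the residual scale are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_attention(features, scale=0.1):
    """Illustrative layer attention: weight N hierarchical feature
    maps by their pairwise correlations, then add a scaled residual.
    features: array of shape (N, C, H, W), one entry per layer."""
    n, c, h, w = features.shape
    flat = features.reshape(n, -1)            # (N, C*H*W)
    corr = softmax(flat @ flat.T, axis=-1)    # (N, N) layer-wise attention
    out = (corr @ flat).reshape(n, c, h, w)   # re-aggregate the layers
    return features + scale * out             # residual connection

# toy usage: 4 hierarchical layers of 8-channel 16x16 features
feats = np.random.rand(4, 8, 16, 16).astype(np.float32)
print(layer_attention(feats).shape)           # (4, 8, 16, 16)
```

The attention matrix here is only N×N (N = number of layers), so the module adds very little parameter and compute cost, which is consistent with the lightweight goal stated above.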