Image Super-Resolution Reconstruction Method Based on Lightweight Symmetric CNN-Transformer
To address the large parameter counts and high training cost of existing Transformer-based image super-resolution reconstruction methods, an image super-resolution reconstruction method based on a lightweight symmetric CNN-Transformer is proposed. Firstly, a symmetric CNN-Transformer block is designed using weight sharing, and the information extracted by the upper and lower branches is fully integrated through a channel attention block to improve the network's ability to capture and exploit both local and global features. Meanwhile, by employing depthwise separable convolution and computing self-attention from the cross-channel covariance matrix, the number of parameters in the Transformer is effectively reduced, along with its computational cost and memory consumption. Secondly, a high-frequency enhancement residual block is introduced into the network to further focus on texture and detail information in high-frequency regions. Finally, the choice of activation function for generating self-attention in the Transformer is explored; experimental analysis demonstrates that the GELU function better promotes feature aggregation and improves network performance. Experimental results show that the proposed method reconstructs richer image textures and details while keeping the network lightweight.
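The abstract names two lightweight mechanisms: depthwise separable convolution and self-attention computed from a cross-channel covariance matrix, with GELU used when generating the attention map. The PyTorch sketch below is a minimal illustration of how such a channel-wise attention module could be assembled; the class and layer names (ChannelwiseSelfAttention, qkv_point, qkv_depth) are hypothetical, and using GELU in place of the usual softmax is an assumption made here for illustration, not a confirmed detail of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelwiseSelfAttention(nn.Module):
    """Self-attention over the channel dimension via a cross-channel
    covariance matrix, with depthwise separable Q/K/V projections.
    Hypothetical layer names; illustrative sketch only."""

    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        # Pointwise + depthwise convolution = depthwise separable projection.
        self.qkv_point = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.qkv_depth = nn.Conv2d(channels * 3, channels * 3, kernel_size=3,
                                   padding=1, groups=channels * 3)
        self.project_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_depth(self.qkv_point(x)).chunk(3, dim=1)

        # Reshape to (batch, heads, channels_per_head, spatial) so attention
        # is computed across channels, yielding a (C x C) covariance-like
        # map instead of the (HW x HW) map of spatial self-attention.
        def split_heads(t):
            return t.reshape(b, self.num_heads, c // self.num_heads, h * w)

        q, k, v = map(split_heads, (q, k, v))
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)

        attn = (q @ k.transpose(-2, -1)) * self.temperature
        # GELU shown here in place of softmax when generating the attention
        # map (an assumption reflecting the activation-function discussion).
        attn = F.gelu(attn)

        out = (attn @ v).reshape(b, c, h, w)
        return self.project_out(out)
```

Because the attention matrix is C x C rather than HW x HW, its cost grows with the channel count instead of the image resolution, which is what makes this formulation attractive for lightweight super-resolution. The high-frequency enhancement residual block is only named in the abstract; one plausible realisation, sketched under the assumption that the high-frequency component is obtained by subtracting a low-pass (average-pooled) version of the feature map, is:

```python
import torch.nn as nn

class HighFrequencyEnhancementResidualBlock(nn.Module):
    """Residual block that emphasises high-frequency detail. The block name
    follows the abstract; the internal structure (average-pool low-pass
    filter, convolutional enhancement of the high-frequency residue) is an
    assumption for illustration."""

    def __init__(self, channels):
        super().__init__()
        self.low_pass = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        self.enhance = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # High-frequency component = input minus its smoothed (low-pass) version.
        high_freq = x - self.low_pass(x)
        # Enhance textures/details and add them back as a residual.
        return x + self.enhance(high_freq)
```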