Image Super-Resolution Reconstruction Based on Partial Separation and Multiscale Fusion
Current deep-learning-based super-resolution reconstruction networks suffer from redundant convolution operations, incomplete reconstruction of image information, and large parameter counts that limit their deployment on edge devices. To address these issues, this study proposes a lightweight image super-resolution reconstruction network based on partial separation and multiscale fusion. The network uses partial convolutions for feature extraction, separating out a subset of the image channels so that only they are convolved, which reduces redundant computation while maintaining reconstruction quality. In addition, a multiscale feature fusion module is designed to learn long-range dependencies and, through a channel attention enhancement group, to capture features along the spatial dimension; this reduces the loss of reconstruction information and effectively restores image details and textures. Finally, because the multiscale feature fusion block focuses on extracting and fusing global features, an efficient inverted residual block is constructed to complement it with local contextual information. The network is evaluated on five benchmark datasets (Set5, Set14, B100, Urban100, and Manga109) at scale factors of ×2, ×3, and ×4, for which it has 373 000, 382 000, and 394 000 parameters and 84.0×10⁹, 38.1×10⁹, and 22.1×10⁹ FLOPs, respectively. Quantitative and qualitative experimental results show that, compared with networks such as VDSR, IMDN, RFDN, and RLFN, the proposed network preserves reconstruction quality with substantially fewer parameters.
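The abstract does not give implementation details, but the channel-splitting idea behind partial convolution can be illustrated with a short PyTorch sketch. In the FasterNet-style formulation that the description matches, a regular convolution is applied to only a fraction of the channels while the remainder pass through unchanged, which is the source of the computational saving. The module name, split ratio, and layer choices below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a partial convolution (PConv) layer.
# The split ratio and kernel size are assumptions for illustration only.
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Convolve only the first dim // ratio channels; pass the rest through."""
    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.conv_dim = dim // ratio          # channels that are convolved
        self.pass_dim = dim - self.conv_dim   # channels passed through untouched
        self.conv = nn.Conv2d(self.conv_dim, self.conv_dim,
                              kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        conv_part, pass_part = torch.split(
            x, [self.conv_dim, self.pass_dim], dim=1)
        return torch.cat([self.conv(conv_part), pass_part], dim=1)

# Usage: of a 64-channel feature map, only 16 channels are convolved.
x = torch.randn(1, 64, 48, 48)
print(PartialConv(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```

With a ratio of 4, the 3×3 convolution touches only a quarter of the channels, so its FLOPs and parameters drop to roughly 1/16 of a full convolution at the same width, consistent with the lightweight design goal stated above.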
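The efficient inverted residual block mentioned in the abstract is likewise not specified; a plausible reading is a MobileNetV2-style block, in which a pointwise expansion, a depthwise 3×3 convolution for local context, and a pointwise projection are wrapped in a residual connection. The expansion factor, activation, and layer order in the sketch below are assumptions, not the authors' design.

```python
# Minimal sketch of an inverted residual block (MobileNetV2 style),
# assumed here as one way to realize the "efficient inverted residual
# block" for local contextual information; details are illustrative.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand -> depthwise 3x3 -> project, with a residual connection."""
    def __init__(self, dim: int, expand: int = 2):
        super().__init__()
        hidden = dim * expand
        self.body = nn.Sequential(
            nn.Conv2d(dim, hidden, kernel_size=1),      # pointwise expansion
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3,
                      padding=1, groups=hidden),        # depthwise: local context
            nn.GELU(),
            nn.Conv2d(hidden, dim, kernel_size=1),      # pointwise projection
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual preserves the identity path

x = torch.randn(1, 64, 48, 48)
print(InvertedResidual(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```

Because the depthwise convolution operates on each channel independently over a small neighborhood, such a block captures exactly the local detail that a globally oriented fusion module tends to miss, which matches the complementary role the abstract assigns to it.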