Lightweight feature distillation attention network for image super-resolution
Existing image super-resolution algorithms often struggle to recover fine image details and incur high computational costs due to their large parameter counts. To address these limitations, we propose a lightweight residual feature distillation attention network (LRFDAN). First, a novel residual feature distillation block is designed to extract features effectively. Second, blueprint separable convolutions replace standard convolutions, reducing computational and memory demands. Finally, an attention mechanism is integrated into the model to further enhance its reconstruction capability. The proposed model is validated on five benchmark datasets, and quantitative analyses together with visual comparisons demonstrate that, compared with other deep neural network models, our network significantly reduces parameters and computational cost while maintaining superior performance and subjective visual quality. These results underscore the effectiveness of the proposed model in terms of both image quality and computational efficiency.
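To illustrate how blueprint separable convolutions reduce parameters relative to standard convolutions, the following is a minimal PyTorch sketch of the unconstrained BSConv formulation (a 1x1 pointwise convolution followed by a depthwise convolution). The class name, channel width, and kernel size are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BSConvU(nn.Module):
    """Blueprint separable convolution (unconstrained variant, illustrative sketch):
    a 1x1 pointwise convolution followed by a k x k depthwise convolution.
    Parameter count is roughly C_in*C_out + C_out*k*k, versus
    C_in*C_out*k*k for a standard convolution."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # Pointwise convolution mixes channels first ...
        self.pw = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        # ... then a depthwise convolution applies one spatial "blueprint" per channel.
        self.dw = nn.Conv2d(out_channels, out_channels, kernel_size,
                            padding=padding, groups=out_channels, bias=True)

    def forward(self, x):
        return self.dw(self.pw(x))

if __name__ == "__main__":
    x = torch.randn(1, 48, 64, 64)        # hypothetical 48-channel feature map
    bsconv = BSConvU(48, 48, kernel_size=3)
    std = nn.Conv2d(48, 48, 3, padding=1)
    print(bsconv(x).shape)                # torch.Size([1, 48, 64, 64])
    # Compare parameter counts of the two layers.
    count = lambda m: sum(p.numel() for p in m.parameters())
    print(count(bsconv), "vs", count(std))  # ~2.8K vs ~20.8K parameters
```

Swapping such a module in for each 3x3 convolution is one way to obtain the computational and memory savings the abstract refers to; the exact placement within the residual feature distillation blocks is described in the paper body.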
deep learning; single image super-resolution; lightweighting; deep feature distillation; attention mechanism