
多尺度关键信息融合的轻量级图像超分辨重建

Lightweight image super-resolution reconstruction based on multi-scale key information fusion

Abstract: Image super-resolution (SR) models based on convolutional neural networks (CNNs) suffer from insufficient feature extraction, large parameter counts caused by overly deep networks, and redundant information that degrades the final reconstruction performance. To address these problems, this paper designs a lightweight densely connected image super-resolution network (LDCN). A multi-scale iterative feature extraction module (MIFEM) fully extracts multi-scale features with few parameters. A key information extraction module (KIEM), built on the residual shrinkage idea, removes more redundant information than the original module, letting the network focus on key information while reducing the module's parameters by 72%. Finally, a feature transfer module (FTM) is introduced into the dense residual network to further reduce model complexity, alleviating the problems of deep layers and large parameter counts. Experimental results show that LDCN outperforms mainstream models in both reconstruction quality and visual perception. On four test sets, compared with the lightweight model MADNet, PSNR improves by 0.1 dB, 0.11 dB, 0.06 dB, and 0.26 dB, respectively, while using only 47.6% of MADNet's parameters.
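The abstract's KIEM builds on the residual shrinkage idea, whose core operation is channel-wise soft thresholding: activations with magnitude below a learned threshold are treated as redundant and zeroed out before the residual addition. The paper's actual module is not given here, so the following is only a minimal NumPy sketch of that general technique; the fixed-fraction threshold stands in for the small learned sub-network that predicts thresholds in a real residual shrinkage block.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: shrink values toward zero and zero out
    entries whose magnitude is below tau (treated as redundant)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def shrinkage_block(features, scale=0.5):
    """Toy channel-wise residual shrinkage step.

    features: array of shape (channels, height, width).
    The per-channel threshold tau is a fraction of the mean absolute
    activation of that channel (an assumption standing in for the
    learned threshold branch of a residual shrinkage network).
    """
    tau = scale * np.abs(features).mean(axis=(1, 2), keepdims=True)
    return features + soft_threshold(features, tau)  # residual connection
```

For example, `soft_threshold(np.array([1.0, -0.2, 0.5]), 0.3)` yields `[0.7, 0.0, 0.2]`: the small activation is suppressed entirely while larger ones are uniformly shrunk, which is how such a module discards redundant responses while keeping key information.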

image super-resolution; convolutional neural network (CNN); multi-scale feature extraction; residual shrinkage network; redundant information; dense residual connection

刘媛媛、程双全、朱路、邬雷


School of Information Engineering, East China Jiaotong University, Nanchang, Jiangxi 330013, China


2024

光电子·激光 (Journal of Optoelectronics·Laser)
Tianjin University of Technology; Chinese Optical Society


Peking University Core Journal (北大核心)
Impact factor: 1.437
ISSN: 1005-0086
Year, volume (issue): 2024, 35(11)