Journal of Shaanxi University of Technology (Natural Science Edition), 2024, Vol. 40, Issue 6: 56-64, 81.

Lightweight feature distillation attention network for image super-resolution

CHANG Kairong 1, SUN Jun 1, HU Mingzhi 1
Author information

  • 1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, Yunnan, China


Abstract

In response to the limitations of existing image super-resolution algorithms, which often struggle with weak image detail recovery and have high computational costs due to large parameter sizes, we propose a lightweight residual feature distillation attention network (LRFDAN). First, a novel residual feature distillation block is designed to extract features effectively. Second, blueprint separable convolutions are utilized to replace standard convolutions, thereby reducing computational and memory demands. Finally, an attention mechanism is integrated into the model to further enhance reconstruction capabilities. The proposed model is validated on five benchmark datasets, and quantitative analyses along with visual comparisons demonstrate that, compared to other deep neural network models, our network significantly reduces parameters and computational cost while maintaining superior performance and subjective visual quality. These results underscore the effectiveness of the proposed model in terms of both image quality and computational efficiency.
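The paper's own implementation is not reproduced on this page, but the parameter savings from blueprint separable convolutions can be illustrated with a minimal PyTorch sketch of the unconstrained variant (BSConv-U): a 1x1 pointwise convolution followed by a KxK depthwise convolution in place of a standard KxK convolution. The layer sizes below (64 channels, 3x3 kernels) are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BSConvU(nn.Module):
    """Blueprint separable convolution, unconstrained variant (BSConv-U):
    a 1x1 pointwise convolution (channel mixing) followed by a
    KxK depthwise convolution (per-channel spatial filtering)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.depthwise = nn.Conv2d(out_ch, out_ch, kernel_size,
                                   padding=padding, groups=out_ch, bias=False)

    def forward(self, x):
        return self.depthwise(self.pointwise(x))

if __name__ == "__main__":
    std = nn.Conv2d(64, 64, 3, padding=1, bias=False)
    bsc = BSConvU(64, 64)
    n_std = sum(p.numel() for p in std.parameters())  # 64*64*3*3 = 36864
    n_bsc = sum(p.numel() for p in bsc.parameters())  # 64*64 + 64*3*3 = 4672
    print(n_std, n_bsc)
    # Both layers map (N, 64, H, W) -> (N, 64, H, W).
    x = torch.randn(1, 64, 32, 32)
    assert bsc(x).shape == std(x).shape
```

For this 64-channel, 3x3 case the separable form uses roughly an eighth of the parameters of the standard convolution, which is the kind of reduction that makes the abstract's lightweight claim plausible.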


Key words

deep learning / single image super-resolution / lightweight / deep feature distillation / attention mechanism


Publication year: 2024
Journal: Journal of Shaanxi University of Technology (Natural Science Edition)
Publisher: Shaanxi University of Technology

Impact factor: 0.425
ISSN: 2096-3998