Lightweight image compression algorithm based on deep learning
The transform modules of deep-learning-based image compression algorithms involve complex architectures and heavy computation. To speed up encoding and decoding, a method was proposed that uses knowledge distillation to reduce the number of parameters and multiply-accumulate operations (MACs) of the original network while preserving the quality of the compressed images as much as possible. The original network and a lightweight network were trained simultaneously, and the performance of the lightweight network was improved by transferring feature information from the original network. In the design of the lightweight network, group convolution was introduced alongside a reduction in the number of channels, so as to retain more feature information while cutting parameters and MACs as far as possible. Experiments on the Kodak and DIV2K test datasets show that, compared with the original network, the lightweight network obtained through knowledge distillation reduces the parameters and MACs to approximately one-sixteenth while still maintaining good image quality.
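As a rough illustration of where a 1/16 reduction can come from: for a standard 2-D convolution, halving the channel count cuts a layer's weights (and hence its MACs) to about 1/4, and splitting the convolution into 4 groups cuts them by another factor of 4. The abstract does not state the actual channel counts or group number, so the figures below (192 channels, 5×5 kernels, 4 groups) are assumptions chosen only to make the arithmetic concrete:

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2-D convolution layer (bias ignored)."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

def conv_macs(c_in, c_out, k, h_out, w_out, groups=1):
    """MACs for one forward pass: one multiply-accumulate per weight per output pixel."""
    return conv_params(c_in, c_out, k, groups) * h_out * w_out

# Hypothetical original layer: 192 -> 192 channels, 5x5 kernel, dense convolution.
orig = conv_params(192, 192, 5)            # 192 * 192 * 25 = 921600
# Hypothetical lightweight layer: channels halved, 4 groups.
lite = conv_params(96, 96, 5, groups=4)    # (96 // 4) * 96 * 25 = 57600
print(orig // lite)  # -> 16
```

Because MACs are proportional to the weight count for a fixed output resolution, the same factor of 16 applies to the computation, which matches the order of reduction reported in the abstract.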

image compression; deep learning; knowledge distillation

FAN Shenwei, LI Guoping, WANG Guozhong


School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China


Supported by the National Key Research and Development Program of China (2019YFB1802700)

2024

Journal of Shanghai University (Natural Science Edition)
Shanghai University


Indexed in CSTPCD and the Peking University Core Journals list
Impact factor: 0.579
ISSN:1007-2861
Year, Volume (Issue): 2024, 30(3)