
Lightweighting Methods for Neural Network Models: A Review
In recent years, thanks to their strong feature extraction capability, neural network models have been applied ever more widely across industries and have achieved good results. However, with the ever-growing amount of data and the continued pursuit of high accuracy, the parameter scale and complexity of neural network models have increased dramatically, driving up computation, storage and other resource overheads and making their deployment in resource-constrained scenarios extremely challenging. Therefore, how to achieve model lightweighting without degrading model performance, and thereby reduce the cost of model training and deployment, has become one of the current research hotspots. This paper summarizes and analyzes typical model lightweighting methods from two perspectives, complex model compression and lightweight model design, in order to clarify the development of model compression technology. Complex model compression techniques are reviewed in five categories: model pruning, model quantization, low-rank decomposition, knowledge distillation and hybrid approaches, while lightweight model design is surveyed in three categories: spatial convolution design, shift-convolution design and neural architecture search (NAS).
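To make two of the surveyed compression techniques concrete, the following is a minimal NumPy sketch of unstructured magnitude pruning and symmetric uniform int8 quantization. The function names, the 50% sparsity target, and the weight matrix are illustrative assumptions, not part of the reviewed paper:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value serves as the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

def quantize_int8(w):
    """Symmetric uniform quantization to int8; returns codes and a dequantization scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w)

print((pruned == 0).mean())                          # fraction of zeroed weights
print(np.abs(w - q.astype(np.float32) * scale).max())  # max quantization error
```

The rounding step bounds the per-weight quantization error by `scale / 2`; real frameworks additionally fine-tune or calibrate after pruning/quantization to recover accuracy, which this sketch omits.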

Keywords: Neural networks; Model compression; Model pruning; Model quantization; Model lightweighting

GAO Yang, CAO Yangjie, DUAN Pengsong


School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou 450000, China


Funding: Zhengzhou Collaborative Innovation Major Project; Key Scientific Research Project of Colleges and Universities in Henan Province; Strategic Consulting Research Project of Henan Research Institute of China Engineering Science and Technology Development Strategy; Henan Provincial Science and Technology Research Project

Grant Nos.: 20XTZX06013; 21A520043; 2022HENYB03; 232102210050

2024

Computer Science (计算机科学)
Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.944
ISSN: 1002-137X
Year, Volume (Issue): 2024, 51(S1)