
Enhanced Residual Networks via Mixed Knowledge Fraction

Methods such as stimulative training and group knowledge based training collect group knowledge from the shallow subnets of residual networks for self-distillation, thereby enhancing network performance. However, the group knowledge acquired by these methods suffers from slow updating and is difficult to combine with data-mixing (DataMix) techniques. To address these issues, enhanced residual networks via mixed knowledge fraction (MKF) are proposed. Mixed knowledge decomposition is modeled as a quadratic programming problem by minimizing the decomposition error, so that high-quality group knowledge can be recovered from the mixed knowledge. To improve the robustness and diversity of the knowledge, multiple data-mixing techniques are combined into a compound DataMix method. In contrast to high-precision but inefficient optimization algorithms, a simple and efficient linear knowledge decomposition scheme is adopted: the previous group knowledge serves as knowledge bases, and the mixed knowledge is decomposed onto these bases. The enhanced group knowledge is then used to distill the sampled subnetworks. Experiments on several mainstream residual models and image classification datasets verify the effectiveness of MKF.
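The core step described in the abstract, decomposing the knowledge produced on a mixed input back onto per-sample knowledge bases by minimizing a decomposition error under a quadratic-programming formulation, can be illustrated with a small numerical sketch. The snippet below is an assumption-laden toy, not the authors' implementation: the function names, tensor shapes, the mixup-style mixing and the projected-gradient solver are all illustrative choices. It mixes the stored group knowledge of two samples and then recovers the mixing weights by minimizing the squared decomposition error under non-negativity and sum-to-one constraints.

```python
# Illustrative sketch only; names, shapes and the solver are assumptions,
# not code from the paper.
import numpy as np

def mixup(p_a, p_b, lam):
    # Convex combination of two vectors, mimicking a mixup-style data/knowledge mix.
    return lam * p_a + (1.0 - lam) * p_b

def decompose_mixed_knowledge(p_mix, bases, steps=300, lr=1.0):
    """Approximately solve  min_w ||bases.T @ w - p_mix||^2  s.t. w >= 0, sum(w) = 1.

    bases : (k, C) array of previously stored group knowledge (soft labels)
            for the k source samples of the mixed input.
    p_mix : (C,) knowledge vector predicted on the mixed input.
    A projected-gradient loop with a clip-and-renormalize step stands in
    for a full quadratic-programming solver.
    """
    k = bases.shape[0]
    w = np.full(k, 1.0 / k)
    for _ in range(steps):
        grad = bases @ (bases.T @ w - p_mix)   # gradient of the decomposition error
        w = np.clip(w - lr * grad, 0.0, None)  # enforce non-negativity
        w = w / max(w.sum(), 1e-12)            # enforce sum-to-one (approximate projection)
    return w

# Toy usage: two source samples, five classes.
rng = np.random.default_rng(0)
bases = rng.dirichlet(np.ones(5), size=2)    # previous group knowledge of samples a and b
p_mix = mixup(bases[0], bases[1], lam=0.7)   # knowledge observed on the mixed input
w = decompose_mixed_knowledge(p_mix, bases)
print(w)                                     # expected to be close to [0.7, 0.3]
```

Because the toy mixed vector is an exact convex combination of the two bases, the recovered weights land near the true mixing coefficients; in the paper's setting the same idea would be applied to the noisier knowledge predicted by sampled subnets on mixed training images.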

Deep Learning; Neural Network; Knowledge Distillation; Network Enhancement; Residual Network

唐圣汲、叶鹏、林炜豪、陈涛


School of Information Science and Technology, Fudan University, Shanghai 200433


National Key R&D Program of China (2022ZD0160100); National Natural Science Foundation of China (62071127, 62101137); Natural Science Foundation of Shanghai (23ZR1402900); Shanghai Municipal Science and Technology Major Project (2021SHZDZX0103)

2024

Pattern Recognition and Artificial Intelligence
Sponsors: Chinese Association of Automation; National Research Center for Intelligent Computing Systems; Institute of Intelligent Machines, Chinese Academy of Sciences


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.954
ISSN: 1003-6059
Year, Volume (Issue): 2024, 37(4)