Journal of Minjiang University (闽江学院学报), 2024, Vol. 45, Issue 5: 42-50. DOI: 10.19724/j.cnki.jmju.2024.05.005

基于改进DINO的联合蒸馏图像分类

Co-Distillation Image Classification Based on Improved DINO

尹威 (YIN Wei)1, 林贵敏 (LIN Guimin)2

Author information

  • 1. College of Physics and Electronic Information Engineering, Minjiang University, Fuzhou 350108, Fujian, China; College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou 350117, Fujian, China
  • 2. College of Physics and Electronic Information Engineering, Minjiang University, Fuzhou 350108, Fujian, China

Abstract

DINO (self-distillation with no labels) was the first method to combine self-supervised learning with the transformer. To incorporate the locality advantage of convolutional networks, a three-branch network model named DINO+ is proposed: a convolutional distillation module is added to DINO to distill knowledge into DINO's transformer, thereby combining the convolutional network with the transformer. The classification accuracy of the distilled ViT (vision transformer) on STL-10 and CIFAR-10 increases by 5.7% and 4.8% respectively, and outperforms other self-supervised models, demonstrating the effectiveness of the proposed method.
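The self-distillation objective underlying the approach can be sketched as follows. This is a minimal illustration of the standard DINO loss (cross-entropy between a sharpened, centered teacher softmax and the student softmax); the function names, temperature values, and the way the convolutional branch's output feeds the ViT student are assumptions based on the abstract, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, temperature):
    """Temperature-scaled softmax, numerically stabilized."""
    z = logits / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits,
                 student_temp=0.1, teacher_temp=0.04, center=0.0):
    """DINO-style cross-entropy H(p_teacher, p_student).

    The teacher output is centered (to discourage collapse) and
    sharpened with a low temperature; during training, gradients
    would flow only through the student term.
    """
    p_t = softmax(teacher_logits - center, teacher_temp)
    log_p_s = np.log(softmax(student_logits, student_temp) + 1e-12)
    return float(-(p_t * log_p_s).sum())

# In a three-branch DINO+ setup (an assumption from the abstract),
# the ViT student would receive two distillation signals: one from
# the EMA teacher ViT and one from the convolutional branch.
vit_student = np.array([1.0, 2.0, 0.5])
vit_teacher = np.array([1.2, 2.1, 0.4])
conv_branch = np.array([0.9, 2.3, 0.6])
total = distill_loss(vit_student, vit_teacher) + distill_loss(vit_student, conv_branch)
```

In DINO itself the teacher is an exponential moving average of the student's weights; the convolutional branch here simply adds a second cross-entropy term of the same form.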

Key words

DINO / self-supervised learning / convolution distillation module / knowledge distillation


Publication year: 2024
Journal of Minjiang University (闽江学院学报), published by Minjiang University
Indexed in: CHSSCD; Impact factor: 0.221; ISSN: 1009-7821