
Co-Distillation Image Classification Based on Improved DINO

DINO (self-distillation with no labels) was the first method to combine self-supervised learning with the transformer. To exploit the locality advantage of convolutional networks, a three-branch network model named DINO+ is proposed: a convolution distillation module is added to DINO to distill knowledge into DINO's transformer, thereby combining the convolutional network with the transformer. After distillation, the classification accuracy of the ViT (vision transformer) on STL-10 and CIFAR-10 increases by 5.7% and 4.8% respectively, surpassing other self-supervised models and demonstrating the effectiveness of the proposed method.
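The abstract describes a DINO-style self-distillation objective augmented with an extra distillation term from a convolutional branch. The paper itself does not give the loss formulation here, so the following is only a minimal NumPy sketch of how such a combined objective could look: the function name `dino_plus_loss`, the weighting factor `alpha`, and the temperatures are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, temp):
    """Temperature-scaled softmax over the last axis."""
    z = x / temp
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_plus_loss(student_out, teacher_out, conv_out, center,
                   t_s=0.1, t_t=0.04, alpha=0.5):
    """Hypothetical combined loss: DINO self-distillation
    (centered, sharpened teacher -> ViT student) plus an extra
    cross-entropy term that distills a convolutional branch's
    soft targets into the same ViT student. All hyperparameter
    values here are assumptions for illustration."""
    # DINO term: teacher outputs are centered and sharpened;
    # in training, gradients would be stopped on the teacher.
    p_teacher = softmax(teacher_out - center, t_t)
    log_p_student = np.log(softmax(student_out, t_s))
    loss_dino = -(p_teacher * log_p_student).sum(axis=-1).mean()
    # Conv-branch term: additional soft targets for the ViT student.
    p_conv = softmax(conv_out, t_t)
    loss_conv = -(p_conv * log_p_student).sum(axis=-1).mean()
    return loss_dino + alpha * loss_conv
```

In a real training loop the teacher would be an exponential moving average of the student, and `center` would be an EMA of teacher outputs, as in the original DINO; this sketch only shows how the convolutional branch could contribute a second distillation term to the total loss.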

DINO; self-supervised learning; convolution distillation module; knowledge distillation

Yin Wei, Lin Guimin


College of Physics and Electronic Information Engineering, Minjiang University, Fuzhou 350108, Fujian, China

College of Optoelectronics and Information Engineering, Fujian Normal University, Fuzhou 350117, Fujian, China


2024

Journal of Minjiang University
Minjiang University

CHSSCD
Impact factor: 0.221
ISSN:1009-7821
Year, volume (issue): 2024, 45(5)