
Decoupled Contrastive Clustering Integrating an Attention Mechanism

To address the negative-positive coupling between positive and negative samples in contrastive clustering, a decoupled contrastive clustering method integrating an attention mechanism (DCCIAM) is proposed. First, data augmentation is used to expand the image data and obtain positive and negative sample pairs. Second, a convolutional block attention module (CBAM) is added to the backbone network so that the network focuses more on target features, and the augmented image data are fed into the backbone to extract features. Third, the features are passed through neural network projection heads to compute the instance loss and the cluster loss separately. Finally, the instance loss and cluster loss are jointly optimized to perform feature representation and cluster assignment. To validate the effectiveness of DCCIAM, experiments are conducted on the public image datasets CIFAR-10, STL-10, and ImageNet-10, achieving clustering accuracies of 80.2%, 77.0%, and 90.4%, respectively. The results show that the decoupled contrastive clustering method integrating an attention mechanism performs well on image clustering tasks.
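The CBAM step mentioned above (channel attention followed by spatial attention) can be sketched as follows. This is a minimal NumPy illustration of the generic CBAM design, not the paper's implementation; `w1`, `w2`, and `conv_k` are placeholder weights, and the reduction ratio and kernel size are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(x, w1, w2, conv_k):
    """Minimal CBAM sketch for one (C, H, W) feature map.

    w1 (C, C//r) and w2 (C//r, C) form the shared MLP of the channel
    branch; conv_k (2, k, k) is the kernel of the spatial branch.
    All weights are illustrative placeholders, not learned parameters.
    """
    C, H, W = x.shape
    # Channel attention: shared MLP over avg- and max-pooled descriptors.
    avg, mx = x.mean(axis=(1, 2)), x.max(axis=(1, 2))       # each (C,)
    mc = sigmoid(np.maximum(avg @ w1, 0) @ w2 +
                 np.maximum(mx @ w1, 0) @ w2)               # (C,)
    x = x * mc[:, None, None]
    # Spatial attention: k x k conv over channel-wise avg and max maps.
    s = np.stack([x.mean(axis=0), x.max(axis=0)])           # (2, H, W)
    _, kh, kw = conv_k.shape
    pad = kh // 2
    sp = np.pad(s, ((0, 0), (pad, pad), (pad, pad)))
    ms = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            ms[i, j] = np.sum(sp[:, i:i + kh, j:j + kw] * conv_k)
    return x * sigmoid(ms)[None]
```

Because both attention maps pass through a sigmoid, the module only rescales the feature map (each value is attenuated, never amplified), which is why it can be dropped into a backbone without changing tensor shapes.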
A decoupled contrastive clustering integrating attention mechanism
To address the issue of negative-positive coupling between positive and negative samples in contrastive clustering, a decoupled contrastive clustering method integrating an attention mechanism (DCCIAM) is proposed. Firstly, data augmentation techniques are employed to expand the image data and obtain positive and negative sample pairs. Secondly, a convolutional block attention module (CBAM) is integrated into the backbone network to make the network pay more attention to target features. The expanded image data are then input into the backbone network to obtain features. Subsequently, the features are passed through a neural network projection head to calculate the instance loss and the clustering loss separately. Finally, feature representation and cluster assignment are performed by jointly optimizing the instance loss and clustering loss. To validate the effectiveness of the DCCIAM method, experiments are conducted on the public image datasets CIFAR-10, STL-10, and ImageNet-10, achieving clustering accuracies of 80.2%, 77.0%, and 90.4%, respectively. The results demonstrate that the decoupled contrastive clustering method integrated with an attention mechanism performs well in image clustering.
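The "decoupling" of positives and negatives at the instance level can be illustrated with a small NumPy sketch. This assumes the standard decoupled-InfoNCE form (the positive pair's similarity is removed from the denominator, so the positive and negative terms no longer compete in one softmax); it is not the paper's code, and the temperature value is a placeholder.

```python
import numpy as np

def decoupled_contrastive_loss(z1, z2, temperature=0.5):
    """Instance-level decoupled contrastive loss (sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same batch.
    Unlike plain InfoNCE, the positive similarity is excluded from the
    denominator, removing the negative-positive coupling.
    """
    # L2-normalise so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    n = z1.shape[0]
    sim = np.exp(z @ z.T / temperature)           # (2N, 2N)
    np.fill_diagonal(sim, 0.0)                    # drop self-similarity

    # Index of each sample's positive: the other augmented view.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    pos = sim[np.arange(2 * n), pos_idx]

    # Decoupled denominator: negatives only, positive term removed.
    denom = sim.sum(axis=1) - pos
    return -np.log(pos / denom).mean()
```

The cluster-level loss in the abstract follows the same contrastive template, applied to the columns of the soft cluster-assignment matrix instead of per-sample embeddings.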

contrastive learning; decoupled contrastive loss; convolutional attention module; image clustering; data augmentation

Liu Hebing, Kong Yujie, Xi Lei, Shang Junping


College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, Henan, China


2024

Computer Engineering & Science
College of Computer, National University of Defense Technology


Indexed in: CSTPCD; Peking University Core Journals List
Impact factor: 0.787
ISSN:1007-130X
Year, volume (issue): 2024, 46(12)