Semantic Contrastive Learning Algorithm Based On Attention Mechanism
Inappropriate data augmentation in contrastive learning may distort semantic information, and there is a large semantic gap between representations of the same image under different types of data augmentation. In addition, the Convolutional Neural Network (CNN) has a strong preference for textures and cannot accurately learn the deep semantic feature representations required for downstream tasks. To address these issues, we propose the Semantic Attention Contrastive Learning method (SACL). SACL first uses a convolutional neural network to extract features; an attention module then mines global features to obtain higher-level semantic features, supplementing low-level features and fusing deep semantic features. Second, positive and negative sample pairs are constructed with entirely different data augmentation methods: positive samples generated by weak augmentation (geometric augmentation) are contrasted against negative samples generated by strong augmentation (texture augmentation), yielding image inputs with more pronounced differences. A gridded augmented view increases the number of positive samples and accelerates network convergence. We verified the effectiveness of the proposed semantic contrastive learning algorithm on four datasets; the results show that the average accuracy on the ImageNet-100 dataset reaches 78.3%, effectively improving the classification accuracy of the model.
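The contrastive comparison described above — an anchor pulled toward multiple weakly augmented (geometric) positive views, including the gridded views, and pushed away from strongly augmented (texture) negatives — can be sketched with a multi-positive InfoNCE-style loss. This is a minimal illustrative sketch, not the authors' implementation; the function names, the temperature value, and the use of cosine similarity are assumptions for illustration.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def multi_positive_info_nce(anchor, positives, negatives, temperature=0.1):
    """Simplified multi-positive InfoNCE loss (illustrative only).

    anchor:    embedding of the original view
    positives: embeddings of weakly (geometrically) augmented views,
               e.g. the extra gridded views
    negatives: embeddings of strongly (texture-) augmented views
    """
    pos = [math.exp(cosine(anchor, p) / temperature) for p in positives]
    neg = [math.exp(cosine(anchor, n) / temperature) for n in negatives]
    denom = sum(pos) + sum(neg)
    # Average the loss over all positives, so extra gridded views
    # contribute additional positive terms.
    return -sum(math.log(p / denom) for p in pos) / len(pos)
```

As expected for such a loss, it is small when the positives lie close to the anchor in embedding space and large when a negative does, which is what drives the anchor representation toward the semantics shared across geometric augmentations.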