Text Classification Method Based on Contrastive Learning and Attention Mechanism
Text classification is a fundamental task in natural language processing and plays an important role in information retrieval, machine translation, sentiment analysis, and other applications. However, most deep learning models do not fully exploit the rich information in training instances during inference, resulting in inadequate text feature learning. To leverage training instance information fully, this paper proposes a text classification method based on contrastive learning and an attention mechanism. First, a supervised contrastive learning training strategy is designed to optimize text vector representations for retrieval, thereby improving the quality of the training instances retrieved during inference. Second, an attention mechanism is constructed to learn the attention distribution over the retrieved training text features, focusing on the most relevant neighboring instances and capturing more implicit similarity features. Finally, the attention mechanism is combined with the model network, fusing information from neighboring training instances to enhance the model's ability to extract diverse features and to achieve both global and local feature extraction. Experimental results demonstrate that this method achieves significant improvements on a variety of models, including Convolutional Neural Networks (CNN), Bidirectional Long Short-Term Memory (BiLSTM), Graph Convolutional Networks (GCN), Bidirectional Encoder Representations from Transformers (BERT), and RoBERTa. For the CNN model, the macro F1 score increases by 4.15, 6.2, and 1.92 percentage points on the THUCNews, Toutiao, and Sogou datasets, respectively. This method therefore provides an effective solution for text classification tasks.
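The two core components summarized above, a supervised contrastive training objective over text vectors and an attention mechanism that fuses retrieved neighboring training instances, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it assumes a SupCon-style loss (same-label instances treated as positives) and standard scaled dot-product attention, and all function names are illustrative.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: pull same-label text vectors together,
    push different-label vectors apart, over L2-normalized features."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    n = len(labels)
    logits_mask = 1.0 - np.eye(n)                      # exclude self-similarity
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    sim = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                             # skip anchors with no positives
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

def neighbor_attention(query, neighbor_feats):
    """Scaled dot-product attention of one query text vector over the
    features of its retrieved training-instance neighbors; returns a
    fused neighbor representation to combine with the model's own features."""
    d = query.shape[-1]
    scores = neighbor_feats @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax attention distribution
    return weights @ neighbor_feats
```

In this sketch, `supcon_loss` would shape the embedding space during training so that retrieval (e.g., via an approximate nearest neighbor index) returns label-consistent neighbors, and `neighbor_attention` weights those neighbors by relevance before their features are fused into the classifier.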
text classification; deep model; contrastive learning; approximate nearest neighbor algorithm; attention mechanism