Joint Fine-tuning Model Based on Self-supervised Contrastive Learning for Aspect-based Sentiment Analysis
Fine-tuning pre-trained models for aspect-based sentiment analysis has been widely adopted and has brought significant improvements. However, most existing studies attach complex downstream structures, some of which even duplicate hidden-layer structures of the pre-trained model, which limits overall model performance. Since contrastive learning helps improve the word-level and sentence-level representations of pre-trained models, a joint fine-tuning framework combining self-supervised contrastive learning with aspect-based sentiment analysis (SSCL-ABSA) was designed. The framework combines the two learning tasks under a concise downstream structure to fine-tune the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model from different angles, which effectively improves aspect-level sentiment analysis. Specifically, the text and the aspect words were spliced into two segments and fed into the BERT encoder as one sample. After encoding, pooling operations were applied to different word representations according to the requirements of the downstream structure: on the one hand, the pooled representation of all words was used for aspect-level sentiment classification; on the other hand, the pooled aspect-word representations of the two segments were used for self-supervised contrastive learning. Finally, the two tasks were combined to fine-tune the BERT encoder in a joint learning manner. Experimental results on three publicly available datasets show that SSCL-ABSA outperforms similar comparison methods. Visualization with t-distributed Stochastic Neighbor Embedding (t-SNE) further shows that SSCL-ABSA effectively improves the entity representations produced by the BERT model.
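The following is a minimal PyTorch sketch of the joint fine-tuning idea described above, assuming HuggingFace Transformers. It is not the authors' implementation: the pairing of the two aspect-word representations, the pooling strategy, the InfoNCE-style contrastive loss, and names such as `aspect_mask_a`, `aspect_mask_b`, `num_polarities`, and `temperature` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn
from transformers import BertModel


class SSCLABSASketch(nn.Module):
    """Joint sentiment-classification + self-supervised contrastive fine-tuning (sketch)."""

    def __init__(self, num_polarities=3, temperature=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_polarities)
        self.temperature = temperature

    @staticmethod
    def masked_mean(hidden, mask):
        # Mean-pool the token representations selected by a 0/1 mask.
        mask = mask.unsqueeze(-1).float()
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, input_ids, attention_mask, aspect_mask_a, aspect_mask_b, labels=None):
        # One sample: text and aspect words spliced as two segments,
        # e.g. [CLS] text [SEP] aspect [SEP] (assumed input layout).
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state

        # Pool all word representations for aspect-level sentiment classification.
        sent_repr = self.masked_mean(hidden, attention_mask)
        logits = self.classifier(sent_repr)

        # Pool the aspect-word representations of the two segments
        # (aspect occurrence inside the text vs. the appended aspect segment)
        # and treat them as a positive pair for contrastive learning.
        z_a = self.masked_mean(hidden, aspect_mask_a)
        z_b = self.masked_mean(hidden, aspect_mask_b)
        sim = F.cosine_similarity(z_a.unsqueeze(1), z_b.unsqueeze(0), dim=-1) / self.temperature
        targets = torch.arange(sim.size(0), device=sim.device)
        contrastive_loss = F.cross_entropy(sim, targets)  # in-batch negatives

        loss = contrastive_loss
        if labels is not None:
            # Joint objective: classification loss plus the contrastive term.
            loss = F.cross_entropy(logits, labels) + contrastive_loss
        return logits, loss
```

In this sketch both objectives back-propagate through the same BERT encoder, which is the joint fine-tuning behaviour the abstract describes; how the two losses are weighted and how positive pairs are formed in the actual SSCL-ABSA model may differ.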