
Joint Fine-tuning Model Based on Self-supervised Contrastive Learning and Aspect-based Sentiment Analysis

Aspect-based sentiment analysis is a challenging fine-grained sentiment analysis task in natural language processing. Fine-tuning pre-trained language models has been widely adopted for this task and has brought clear performance gains. However, most existing studies design relatively complex downstream structures, some of which even overlap with part of the pre-trained model's hidden layers, limiting overall model performance. Because contrastive learning helps improve the word-level and sentence-level representations of pre-trained language models, a joint fine-tuning model combining self-supervised contrastive learning with aspect-based sentiment analysis (SSCL-ABSA) is designed. The model joins the two learning tasks through a concise downstream structure, fine-tuning the pre-trained bidirectional encoder representations from Transformers (BERT) model from different perspectives and effectively improving aspect-based sentiment analysis. Specifically, in the BERT encoding stage, the review text and the aspect term are concatenated into two segments and fed into the BERT encoder to obtain the representation of each token. Pooling operations are then applied to different token representations according to the requirements of the downstream structure: on the one hand, all token representations are pooled for aspect-based sentiment analysis; on the other hand, the aspect-term representations of the two segments are pooled for self-supervised contrastive learning. Finally, the two tasks are combined to fine-tune the BERT encoder through joint learning. Experimental evaluation on three public datasets shows that SSCL-ABSA outperforms other comparable methods. With the help of t-distributed stochastic neighbor embedding (t-SNE), it is visually demonstrated that SSCL-ABSA effectively improves the entity representations of the BERT model.
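The encoding-and-pooling pipeline described above can be made concrete with a short PyTorch/Hugging Face sketch. This is a minimal illustration, not the authors' released implementation: the mean-pooling choices, the NT-Xent-style contrastive loss, the temperature and the loss-balancing weight cl_weight, and the aspect_mask_a/aspect_mask_b inputs (marking the aspect-word tokens inside the review segment and in the appended aspect segment) are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class SSCLABSASketch(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=3,
                 temperature=0.1, cl_weight=0.5):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
        self.temperature = temperature   # assumed contrastive temperature
        self.cl_weight = cl_weight       # assumed loss-balancing weight

    @staticmethod
    def masked_mean(hidden, mask):
        # Mean-pool the hidden states selected by a 0/1 token mask.
        mask = mask.unsqueeze(-1).float()
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, input_ids, attention_mask, token_type_ids,
                aspect_mask_a, aspect_mask_b, labels=None):
        # Encode the "review [SEP] aspect" pair with BERT.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids).last_hidden_state

        # (1) Pool all token representations for sentiment classification.
        logits = self.classifier(self.masked_mean(hidden, attention_mask))

        # (2) Pool the aspect-word tokens of each segment: the occurrence
        #     inside the review and the occurrence in the appended aspect
        #     segment give two views of the same aspect.
        za = F.normalize(self.masked_mean(hidden, aspect_mask_a), dim=-1)
        zb = F.normalize(self.masked_mean(hidden, aspect_mask_b), dim=-1)

        # NT-Xent-style contrastive loss: each view should match its own
        # counterpart in the batch and repel aspects from other samples.
        sim = za @ zb.t() / self.temperature
        targets = torch.arange(sim.size(0), device=sim.device)
        cl_loss = 0.5 * (F.cross_entropy(sim, targets) +
                         F.cross_entropy(sim.t(), targets))

        loss = None
        if labels is not None:
            loss = F.cross_entropy(logits, labels) + self.cl_weight * cl_loss
        return logits, loss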
Joint Fine-tuning Model Based on Self-supervised Contrastive Learning and Aspect-based Sentiment Analysis
Fine-tuning pre-trained language models has been widely used for aspect-based sentiment analysis and has achieved significant improvements. However, most existing studies use complex downstream structures that even overlap with some hidden-layer structures of the pre-trained model, which limits overall model performance. Since contrastive learning helps improve the word-level and sentence-level representations of pre-trained models, a joint fine-tuning framework combining self-supervised contrastive learning with aspect-based sentiment analysis (SSCL-ABSA) was designed. The framework combines the two learning tasks with a concise downstream structure to fine-tune the pre-trained bidirectional encoder representations from Transformers (BERT) model from different angles, which effectively improves aspect-based sentiment analysis. Specifically, the review text and the aspect term were concatenated into two segments and fed into the BERT encoder. After encoding, pooling operations were applied to different word representations according to the requirements of the downstream structure: on the one hand, all word representations were pooled for aspect-based sentiment analysis; on the other hand, the aspect-term representations of the two segments were pooled for self-supervised contrastive learning. Finally, the two tasks were combined to fine-tune the BERT encoder in a joint learning manner. Experimental evaluation on three publicly available datasets shows that SSCL-ABSA outperforms other comparable methods. With the help of the t-distributed stochastic neighbor embedding (t-SNE) method, it is shown visually that SSCL-ABSA effectively improves the entity representations of the BERT model.
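As a rough illustration of the t-SNE visualization mentioned above, the following scikit-learn/matplotlib sketch projects pooled aspect representations to two dimensions and colors them by sentiment label. The array names aspect_reprs and labels are hypothetical placeholders, and the perplexity and initialization settings are assumptions, not the paper's configuration.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_aspect_tsne(aspect_reprs: np.ndarray, labels: np.ndarray) -> None:
    # Project the high-dimensional BERT representations down to 2-D.
    points = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(aspect_reprs)
    # Draw one scatter group per sentiment label.
    for lab in np.unique(labels):
        sel = labels == lab
        plt.scatter(points[sel, 0], points[sel, 1], s=8, label=str(lab))
    plt.legend()
    plt.title("t-SNE of pooled aspect representations")
    plt.show()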

aspect-based sentiment analysis; self-supervised contrastive learning; pre-trained model; BERT encoder; joint fine-tuning

狄广义、陈见飞、杨世军、高军、王耀坤、余本功


国能数智科技开发(北京)有限公司, Beijing 100011, China

School of Management, Hefei University of Technology, Hefei 230009, China

aspect-based sentiment analysis; self-supervised contrastive learning; pre-trained language model; BERT encoder; joint fine-tuning

National Natural Science Foundation of China

71671057

2024

科学技术与工程 (Science Technology and Engineering)
中国技术经济学会


CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.338
ISSN:1671-1815
Year, Volume (Issue): 2024, 24(21)