Most current aspect-based sentiment analysis methods extract sentiment features through dependency trees and attention mechanisms; they are susceptible to noise from context-irrelevant information, often neglect to model the global sentiment features of a sentence, and struggle with sentences that express sentiment implicitly. To address these problems, a contrastive learning based multi-view feature fusion model for aspect-based sentiment analysis (CLMVFF) is proposed. First, graph convolutional networks encode the information in the dependency graph, constituent graph and semantic graph, and a global sentiment node is constructed in each graph to learn global sentiment features, while external knowledge embeddings are introduced to enrich the sentiment features. Then, contrastive learning is applied to reduce the negative impact of noise, combined with similarity separation to enhance the sentiment features. Finally, the dependency graph representation, constituent graph representation, semantic graph representation and external knowledge embeddings are fused to obtain a multi-view feature-enhanced representation. Experiments on three datasets show that CLMVFF achieves a clear performance improvement.
Contrastive Learning Based Multi-view Feature Fusion Model for Aspect-Based Sentiment Analysis
Current aspect-based sentiment analysis methods typically extract sentiment features through dependency trees and attention mechanisms. These methods are susceptible to noise from irrelevant contextual information and often neglect to model the global sentiment features of sentences, making it difficult to process sentences that express sentiment implicitly. To address these problems, a contrastive learning based multi-view feature fusion model for aspect-based sentiment analysis (CLMVFF) is proposed. First, graph convolutional networks are utilized to encode information in the dependency graph, constituent graph and semantic graph. A global sentiment node is constructed in each graph to learn global sentiment features, while external knowledge embeddings are introduced to enrich the sentiment features. Second, contrastive learning is exploited to mitigate the negative influence of noise, combined with similarity separation to enhance the sentiment features. Finally, the dependency graph representation, constituent graph representation, semantic graph representation and external knowledge embeddings are fused to obtain a multi-view feature-enhanced representation. Experimental results on three datasets demonstrate that CLMVFF achieves consistent performance improvements.
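The pipeline outlined above — per-view graph convolutional encoding, an appended global sentiment node, and a contrastive objective across views — can be illustrated with a minimal numpy sketch. All function names, the mean-pooled initialization of the global node, and the InfoNCE form of the contrastive loss are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN layer with self-loops and symmetric normalization:
    # ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

def add_global_node(A, H):
    # Append a global sentiment node connected to every token,
    # initialized here (as an assumption) with the mean token feature.
    n = A.shape[0]
    A2 = np.pad(A, ((0, 1), (0, 1)))
    A2[n, :n] = 1.0
    A2[:n, n] = 1.0
    H2 = np.vstack([H, H.mean(axis=0, keepdims=True)])
    return A2, H2

def info_nce(z1, z2, tau=0.1):
    # InfoNCE-style contrastive loss between two graph views:
    # matching rows of z1/z2 are positives, all others negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p) + 1e-12).mean()

# Toy demo: one view (e.g. the dependency graph) with 5 tokens.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.maximum(A, A.T)                          # make it undirected
H = rng.standard_normal((5, 8))
W = rng.standard_normal((8, 8))
A_g, H_g = add_global_node(A, H)
H_out = gcn_layer(A_g, H_g, W)                  # (6, 8): 5 tokens + global node

# Contrastive loss between two (here random) view representations.
loss = info_nce(rng.standard_normal((4, 8)), rng.standard_normal((4, 8)))
```

In the full model, one such encoder runs per view (dependency, constituent, semantic), the contrastive loss pulls matching node representations across views together, and the three view representations are concatenated with external knowledge embeddings for classification.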