
A study of a text sentiment multi-classification model integrating BERT and contrastive learning
To identify public sentiment more accurately, this study investigates the problem of multi-class text sentiment analysis. To address the long-tail problem arising from imbalanced data distribution in multi-class text sentiment analysis and to improve classification performance, a text sentiment multi-classification model integrating BERT and contrastive learning is proposed. The BERT model first generates text embedding vectors, which are fed into a projection layer to further capture essential features and reduce dimensionality. By treating sentiment texts of the same category as positive samples and texts of different categories as negative samples, the model performs contrastive learning between positive and negative samples, and ultimately learns deeper feature representations of long-tail sentiment categories. Performance tests are conducted on the SMP2020 Weibo sentiment classification public datasets; experimental results show that the model achieves accuracies of 85% and 86.4% on the two datasets, respectively, outperforming several traditional models in classification performance.
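As a rough illustration of the pipeline described in the abstract (not the authors' released code), the following PyTorch sketch pairs a BERT encoder and a projection layer with a supervised contrastive loss in which same-category texts act as positives and different-category texts as negatives. The model name `bert-base-chinese`, the projection size, the temperature, and the specific loss formulation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions: PyTorch + Hugging Face Transformers, bert-base-chinese,
# 128-dim projection, temperature 0.1). Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class BertContrastiveEncoder(nn.Module):
    def __init__(self, model_name="bert-base-chinese", proj_dim=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Projection layer: captures salient features and reduces dimensionality.
        self.proj = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, proj_dim),
        )

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # [CLS] sentence embedding
        return F.normalize(self.proj(cls), dim=-1)  # unit-norm projected vector

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Same-label pairs are positives, different-label pairs are negatives."""
    sim = z @ z.T / temperature                              # scaled cosine similarities
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float("-inf"))          # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    pos_counts = positives.sum(dim=1).clamp(min=1)
    # Average log-probability over each anchor's positives, then over anchors.
    loss = -log_prob.masked_fill(~positives, 0.0).sum(dim=1) / pos_counts
    return loss.mean()

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-chinese")
    model = BertContrastiveEncoder()
    texts = ["今天很开心!", "太高兴了", "真让人生气", "气死我了"]
    labels = torch.tensor([0, 0, 1, 1])                      # toy sentiment labels
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    z = model(batch["input_ids"], batch["attention_mask"])
    print(supervised_contrastive_loss(z, labels))
```

In this sketch the contrastive loss would be computed on the projected embeddings during training; a classification head over the encoder output (not shown) would then produce the final sentiment predictions.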

text sentiment classification; BERT model; contrastive learning

Gao Ruojun, Ai Danxiang, Liang Yuanya (高若军、艾丹祥、梁渊雅)


School of Management, Guangdong University of Technology, Guangzhou 510520


Journal: Modern Computer (现代计算机)
Publisher: 中大控股
Impact factor: 0.292
ISSN: 1007-1423
Year, Volume (Issue): 2024, 30(23)