
Research on Multi-Classification of Live-Streaming Bullet-Screen Emotions Based on Deep Learning

To improve the accuracy, efficiency, and objectivity of bullet-screen (danmaku) comment analysis in live-streaming scenarios, this paper proposes MacBERT-BILSTM-CNN, a multi-class bullet-screen emotion classification model that combines the MacBERT pre-trained language model with a BILSTM-CNN model and classifies emotions along seven dimensions: joy, liking, anger, sorrow, surprise, disgust, and fear. In addition, because emotional symbols such as kaomoji and emoji carry implicit information that affects bullet-screen sentiment analysis, these symbols are replaced with their textual emotional meaning. In comparative experiments on the same dataset, the evaluation metrics of MacBERT-BILSTM-CNN improve to varying degrees over the CNN, BILSTM-CNN, and MacBERT models, showing that the model performs better on the multi-class bullet-screen emotion classification task. After the emotional symbols are replaced, the evaluation metrics also improve over those obtained on the original dataset, demonstrating that fully exploiting the implicit information carried by emotional symbols can increase the accuracy of bullet-screen emotion-tendency judgement.
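The abstract describes the model only at a high level. The following is a minimal PyTorch sketch of a MacBERT-BiLSTM-CNN classifier for the seven emotion classes, assuming the public hfl/chinese-macbert-base checkpoint and illustrative hyper-parameters (LSTM hidden size, filter count, kernel size) that the abstract does not specify; it is not the authors' exact implementation.

```python
# Minimal sketch: MacBERT encoder -> BiLSTM -> 1-D CNN + max-pooling -> 7-way classifier.
# Hyper-parameters below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MacBertBiLstmCnn(nn.Module):
    def __init__(self, pretrained="hfl/chinese-macbert-base",
                 lstm_hidden=128, num_filters=100, kernel_size=3, num_classes=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)            # MacBERT encoder
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)     # sequence context
        self.conv = nn.Conv1d(2 * lstm_hidden, num_filters, kernel_size)  # local n-gram features
        self.fc = nn.Linear(num_filters, num_classes)                   # 7 emotion classes

    def forward(self, input_ids, attention_mask):
        # Token-level representations from MacBERT
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bilstm(hidden)                       # (B, T, 2*lstm_hidden)
        feat = torch.relu(self.conv(seq.transpose(1, 2)))  # (B, num_filters, T')
        pooled = torch.max(feat, dim=2).values             # max-pool over time
        return self.fc(pooled)                             # emotion logits, shape (B, 7)

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")
model = MacBertBiLstmCnn()
batch = tokenizer(["主播太强了,太精彩了!"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # (1, 7) class scores
```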
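The emotion-symbol replacement step is likewise only named in the abstract. A minimal sketch is shown below, assuming a small hand-built dictionary that maps kaomoji and emoji to emotion words; the entries are illustrative examples, not the paper's actual lexicon.

```python
# Sketch of pre-processing that rewrites kaomoji / emoji as emotion words,
# so their sentiment is visible to the text model. Mapping entries are hypothetical.
SYMBOL_TO_WORD = {
    "(^_^)": "开心",   # joy
    "(T_T)": "难过",   # sorrow
    "😡": "愤怒",      # anger
    "😱": "惊讶",      # surprise
}

def replace_emotion_symbols(text: str) -> str:
    """Replace kaomoji and emoji with the emotion words they convey."""
    for symbol, word in SYMBOL_TO_WORD.items():
        text = text.replace(symbol, word)
    return text

print(replace_emotion_symbols("主播加油(^_^) 今天被气死了😡"))
# -> 主播加油开心 今天被气死了愤怒
```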

Keywords: bullet screen (danmaku); emotion multi-classification; pre-trained language model; kaomoji; emoji

焦科元 (Jiao Keyuan)


College of Computer and Information, China Three Gorges University, Yichang, Hubei 443000, China


Journal: Changjiang Information & Communication (长江信息通信)
Publisher: Hubei Communications Services Company
Impact factor: 0.338
ISSN: 2096-9759
Year, Volume (Issue): 2024, 37(5)