
结合模态表征学习的多模态情感分析

Current research on video-based multimodal sentiment analysis suffers from two problems: the dynamic independence between modalities is not fully considered, and modality fusion lacks control over information flow. To address these problems, this study proposes a multimodal sentiment analysis model incorporating modality representation learning. First, BERT and LSTM are used to mine the internal information of the text, audio, and video modalities respectively. Then, modality representation learning is introduced to obtain more informative unimodal features. In the modality fusion stage, a gating mechanism is incorporated to improve the traditional Transformer fusion mechanism so as to control the information flow more precisely. Experimental results on the public CMU-MOSI and CMU-MOSEI datasets show that, compared with traditional models, both accuracy and F1 score are improved, validating the effectiveness of the model.
Multimodal Sentiment Analysis Incorporating Modal Representation Learning
In current multimodal sentiment analysis of videos, the influence of modality representation learning on modality fusion and on the final classification results has not been adequately considered. To this end, this study proposes a multimodal sentiment analysis model that integrates cross-modal representation learning. First, the study utilizes BERT and LSTM to extract internal information from the text, audio, and visual modalities separately, followed by cross-modal representation learning to obtain more informative unimodal features. In the modality fusion stage, the study incorporates a gating mechanism into the traditional Transformer fusion mechanism to control the information flow more accurately. Experimental results on the publicly available CMU-MOSI and CMU-MOSEI datasets demonstrate that the accuracy and F1 score of this model are improved compared with traditional models, validating the effectiveness of the model.
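The gated control of information flow described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: it assumes an element-wise gate of the form g = sigmoid(w·[h_text; h_audio] + b) that interpolates between two modality feature vectors, and all names (`gated_fusion`, `w_gate`, `b_gate`) are hypothetical.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function used by the gate."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(h_text, h_audio, w_gate, b_gate):
    """Fuse two modality feature vectors with an element-wise gate.

    For each dimension i, the gate value
        g_i = sigmoid(w_gate[i][0] * h_text[i] + w_gate[i][1] * h_audio[i] + b_gate[i])
    controls how much of the text feature (versus the audio feature)
    flows into the fused representation:
        fused_i = g_i * h_text[i] + (1 - g_i) * h_audio[i]
    In a trained model, w_gate and b_gate would be learned parameters.
    """
    fused = []
    for i in range(len(h_text)):
        g = sigmoid(w_gate[i][0] * h_text[i] + w_gate[i][1] * h_audio[i] + b_gate[i])
        fused.append(g * h_text[i] + (1.0 - g) * h_audio[i])
    return fused
```

With a strongly positive bias the gate saturates near 1 and the fused vector follows the text features; with a strongly negative bias it follows the audio features, illustrating how the gate regulates which modality's information passes through.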

multimodal sentiment analysis; representation learning; feature fusion; gating mechanism; multi-head attention mechanism

LIU Ruochen, FENG Guang, LUO Liangyu, LIN Haoze


School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China

School of Automation, Guangdong University of Technology, Guangzhou 510006, China


National Natural Science Foundation of China; Guangdong Philosophy and Social Science Youth Project

62237001; GD23YJY08

2024

Computer Systems & Applications (计算机系统应用)
Institute of Software, Chinese Academy of Sciences

CSTPCD
Impact factor: 0.449
ISSN:1003-3254
Year, Volume (Issue): 2024, 33(5)