Computer Systems & Applications, 2024, Vol. 33, Issue 5: 280-287. DOI: 10.15888/j.cnki.csa.009492

Multimodal Sentiment Analysis Incorporating Modal Representation Learning

Liu Ruochen 1, Feng Guang 2, Luo Liangyu 1, Lin Haoze 2
Author information

  • 1. School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China
  • 2. School of Automation, Guangdong University of Technology, Guangzhou 510006, China

Abstract

Current research on video-based multimodal sentiment analysis has two shortcomings: the dynamic independence between modalities is not fully considered, and modal fusion lacks control over information flow. To address these problems, this study proposes a multimodal sentiment analysis model that incorporates modal representation learning. First, BERT and LSTM are used to mine the internal information of the text, audio, and visual modalities separately. Next, modal representation learning is introduced to obtain more information-rich unimodal features. In the modal fusion stage, a gating mechanism is integrated into the traditional Transformer fusion mechanism to control the information flow more precisely. Experimental results on the public CMU-MOSI and CMU-MOSEI datasets show that, compared with traditional models, the proposed model improves both accuracy and F1 score, validating its effectiveness.
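The abstract describes a gating mechanism that controls how much information from each modality flows into the fused representation. The following is a minimal, dependency-free sketch of that idea only; the function names, the scalar parameters `w` and `b`, and the two-modality pairing are illustrative assumptions standing in for the learned linear gate layers in the paper's actual model, whose details are not given here.

```python
import math

def sigmoid(x):
    """Squash a real value into (0, 1), used as the gate activation."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(text_feat, audio_feat, w, b):
    """Fuse two same-length modality feature vectors element-wise.

    For each position i, a gate g_i = sigmoid(w * (t_i + a_i) + b)
    decides how much of the text feature versus the audio feature
    passes into the fused output: g_i * t_i + (1 - g_i) * a_i.
    A scalar (w, b) replaces the learned weight matrices for brevity.
    """
    fused = []
    for t, a in zip(text_feat, audio_feat):
        g = sigmoid(w * (t + a) + b)      # information-flow gate in (0, 1)
        fused.append(g * t + (1 - g) * a)
    return fused

# With w = b = 0 the gate is 0.5 everywhere, so fusion is a plain average:
print(gated_fusion([1.0, 2.0], [3.0, 4.0], 0.0, 0.0))  # [2.0, 3.0]
```

A large positive pre-activation drives the gate toward 1, letting the first modality dominate; in the full model this selectivity is what lets fusion suppress uninformative modality features instead of mixing them uniformly.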

Key words

multimodal sentiment analysis / representation learning / feature fusion / gating mechanism / multi-head attention mechanism


Funding

National Natural Science Foundation of China (62237001)

Guangdong Provincial Youth Project of Philosophy and Social Sciences (GD23YJY08)

Publication year: 2024
Journal: Computer Systems & Applications (Institute of Software, Chinese Academy of Sciences)
Indexed in: CSTPCD
Impact factor: 0.449
ISSN: 1003-3254
Number of references: 21