
Multimodal Sentiment Analysis Model Based on Joint Implicit Features

In image-text multimodal sentiment analysis, most existing research focuses on extracting explicit features from image-text pairs while overlooking the high-level implicit semantic feature associations present in multimodal data. To address this gap, we propose a multimodal sentiment analysis model based on joint implicit features. Alongside explicit feature extraction modules built with RoBERTa and VGG16, the model introduces an implicit feature extraction module that leverages the strong generalization capability and high-level semantic feature learning ability of the CLIP model to extract implicit features from multimodal data. The explicit and implicit features are then weighted and fused into multi-level feature vectors, on which the final sentiment classification is performed. The effectiveness of the proposed model is experimentally validated on the MVSA-Single and MVSA-Multiple datasets.
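The abstract describes a two-branch architecture: explicit features from RoBERTa (text) and VGG16 (image), implicit features from CLIP, weighted fusion into a joint vector, and a sentiment classifier. The PyTorch sketch below illustrates that pipeline only in outline; the checkpoint names, the 512-dimensional projection, the fixed fusion weight alpha, the three-class head, and all argument names are my assumptions, not the authors' reported configuration.

import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights
from transformers import RobertaModel, CLIPModel


class JointImplicitSentimentModel(nn.Module):
    """Sketch: weighted fusion of explicit (RoBERTa + VGG16) and implicit (CLIP) features."""

    def __init__(self, hidden_dim=512, num_classes=3, alpha=0.5):
        super().__init__()
        # Explicit feature extractors for text and image.
        self.text_encoder = RobertaModel.from_pretrained("roberta-base")
        self.image_encoder = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)
        self.image_encoder.classifier[6] = nn.Identity()   # expose 4096-d fc7 features
        # Implicit (high-level semantic) feature extractor.
        self.clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

        # Project both feature groups into a common space before weighting.
        self.explicit_proj = nn.Linear(768 + 4096, hidden_dim)
        self.implicit_proj = nn.Linear(512 + 512, hidden_dim)
        self.alpha = alpha                                  # assumed fixed fusion weight
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_ids, text_mask, vgg_pixels,
                clip_text_ids, clip_text_mask, clip_pixels):
        # Explicit features: RoBERTa pooled [CLS] embedding and VGG16 fc7 activations.
        text_feat = self.text_encoder(text_ids, attention_mask=text_mask).pooler_output
        img_feat = self.image_encoder(vgg_pixels)
        explicit = self.explicit_proj(torch.cat([text_feat, img_feat], dim=-1))

        # Implicit features: CLIP's joint text and image embeddings.
        clip_t = self.clip.get_text_features(input_ids=clip_text_ids,
                                             attention_mask=clip_text_mask)
        clip_i = self.clip.get_image_features(pixel_values=clip_pixels)
        implicit = self.implicit_proj(torch.cat([clip_t, clip_i], dim=-1))

        # Weighted fusion of the two feature levels, then sentiment classification.
        fused = self.alpha * explicit + (1.0 - self.alpha) * implicit
        return self.classifier(fused)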

multimodal; deep learning; sentiment analysis

董学祎、宫义山


Shenyang University of Technology, Shenyang, Liaoning 110000, China


2024

长江信息通信 (Changjiang Information & Communications)
湖北通信服务公司 (Hubei Communications Service Company)


Impact factor: 0.338
ISSN: 2096-9759
Year, Volume (Issue): 2024, 37(5)