A multimodal emotion recognition algorithm based on speech, text and facial expression
Aiming at the problems of low recognition accuracy and poor generalization ability in current multimodal emotion recognition algorithms, particularly in modal feature extraction and inter-modal information fusion, a multimodal emotion recognition algorithm based on speech, text and facial expression is proposed. First, a shallow feature extraction network (Sfen) combined with a parallel convolution module (Pconv) is designed to extract emotional features from speech and text, and a modified Inception-ResnetV2 model is adopted to capture expression features from video sequences. Second, to strengthen the correlation among modalities, a cross attention module is designed to optimize the fusion of the speech and text modalities. Finally, a bidirectional long short-term memory module based on an attention mechanism (BiLSTM-Attention) is used to focus on key information and preserve the temporal correlation between modalities. A comparison of different combinations of the three modalities shows that fusing the speech and text features in advance significantly improves recognition accuracy. Experimental results on the public emotion datasets CH-SIMS and CMU-MOSI show that the proposed model achieves higher recognition accuracy than the baseline models, with three-class and binary classification accuracy reaching 97.82% and 98.18%, respectively, demonstrating the effectiveness of the model.
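The abstract does not give implementation details for the cross attention module, so the following is only a minimal PyTorch sketch of one plausible realization: speech features attend over text features and vice versa, and the two enriched sequences are concatenated into a fused speech-text representation. All names (`CrossAttentionFusion`, `d_model`, `n_heads`) and shapes are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a speech-text cross attention fusion block.
# Assumes both modalities are already projected to a common dimension
# d_model; names and shapes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        # Speech queries attend over text keys/values, and vice versa.
        self.speech_to_text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text_to_speech = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, speech: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # speech: (batch, T_s, d_model), text: (batch, T_t, d_model)
        s_att, _ = self.speech_to_text(speech, text, text)    # speech enriched by text
        t_att, _ = self.text_to_speech(text, speech, speech)  # text enriched by speech
        # Residual connection + layer norm per modality, then concatenate
        # along the time axis to form the fused speech-text sequence.
        fused = torch.cat([self.norm(speech + s_att),
                           self.norm(text + t_att)], dim=1)
        return fused
```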
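Similarly, the BiLSTM-Attention module is only named, not specified. A common formulation scores each BiLSTM hidden state with a learned projection and pools the sequence by the resulting softmax weights, which matches the stated goal of focusing on key information while preserving temporal correlation. The sketch below follows that standard pattern under the same hypothetical naming; the fused sequence from the block above, combined with the Inception-ResnetV2 expression features, would be a natural input.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    # Hypothetical sketch: a BiLSTM whose hidden states are pooled by
    # additive attention weights, emphasizing the most informative time
    # steps before classification (e.g. the paper's three-class setting).
    def __init__(self, d_in: int = 128, d_hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.bilstm = nn.LSTM(d_in, d_hidden, batch_first=True, bidirectional=True)
        self.att_score = nn.Linear(2 * d_hidden, 1)  # one scalar score per time step
        self.classifier = nn.Linear(2 * d_hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.bilstm(x)                        # (batch, T, 2*d_hidden)
        w = torch.softmax(self.att_score(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)                 # weighted pooling of hidden states
        return self.classifier(context)              # sentiment logits
```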