
Speech emotion recognition based on dual-channel feature fusion network

To address the difficulty of extracting discriminative emotional features in speech emotion recognition, a dual-channel feature-fusion speech representation method is proposed that combines a convolutional neural network (CNN) with a vision transformer. The convolutional channel, built on an inverted-bottleneck structure, adopts a transformer-like training strategy to extract local spectral features. The global channel extracts sequence features with an improved vision transformer, in which a CNN processes the whole spectrogram directly in place of patch splitting, so that temporal information is better captured. The features extracted by the two channels are fused to obtain highly discriminative emotional features, which are finally fed to a Softmax classifier to produce the recognition result. In experiments on the EMO-DB and CASIA databases, the proposed model achieves average accuracies of 94.24% and 93.05%, respectively; in comparative experiments it outperforms the other models, demonstrating the effectiveness of the method.
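The two-channel pipeline summarized in the abstract (local CNN features and global transformer features, fused and passed to a Softmax classifier) can be sketched in miniature. This is purely illustrative: the feature dimensions, fusion by concatenation, and the toy classifier weights below are assumptions for the sketch, not the paper's actual architecture or parameters.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(local_features, global_features):
    """Fuse the two channels by concatenation (one common fusion choice)."""
    return local_features + global_features

def classify(fused, weights, biases):
    """Linear layer followed by Softmax; one weight row per emotion class."""
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

# Toy example: 3-dim "local" and 3-dim "global" feature vectors,
# classified into 4 emotion classes with arbitrary weights.
local_feat = [0.2, 0.5, 0.1]    # stand-in for inverted-bottleneck CNN output
global_feat = [0.4, 0.3, 0.6]   # stand-in for vision-transformer output
fused = fuse(local_feat, global_feat)
weights = [[0.1] * 6, [0.2] * 6, [0.3] * 6, [0.4] * 6]
biases = [0.0, 0.0, 0.0, 0.0]
probs = classify(fused, weights, biases)
print(probs)  # four class probabilities summing to 1
```

In the actual model, `local_feat` and `global_feat` would be high-dimensional outputs of the CNN and transformer channels, and the classifier weights would be learned jointly with both channels.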

speech emotion recognition; convolutional neural network; vision transformer; feature fusion

周晓彦, 王丽丽, 邵勇斌, 鞠醒


School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China


2024

声学技术 (Technical Acoustics)
Publishers: East China Sea Research Station, Institute of Acoustics, Chinese Academy of Sciences; Institute of Acoustics, Tongji University; Shanghai Acoustical Society; Shanghai Marine Electronic Equipment Research Institute


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.415
ISSN: 1000-3630
Year, Volume (Issue): 2024, 43(6)