
Multiple sound event detection based on auditory fusion features

To improve the performance of the multiple sound event detection task, this paper studies the Cascade of Asymmetric Resonators with Fast-Acting Compression (CARFAC) digital cochlear model in depth and proposes a multiple sound event detection method based on auditory fusion features. The method first uses CARFAC to extract the Neural Activity Pattern (NAP) of the mixed sound; the NAP is then concatenated with Gammatone Frequency Cepstral Coefficients (GFCC) to form a fused auditory feature, which is fed into a Convolutional Recurrent Neural Network (CRNN) for fully supervised learning to detect urban sound events. Experimental results show that, at low signal-to-noise ratios and with a larger number of overlapping events, the fused auditory feature offers better robustness and multiple sound event detection performance than individual features such as NAP, MFCC, and GFCC.
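To make the fusion-plus-CRNN stage concrete, the following is a minimal sketch, not the authors' implementation: precomputed NAP and GFCC frames (represented here by placeholder tensors, since CARFAC and GFCC extraction are assumed to happen upstream) are concatenated along the feature axis and passed to a small convolutional recurrent network that outputs frame-level, multi-label event probabilities. All shapes, layer sizes, and the class count of 10 are illustrative assumptions.

# Minimal sketch (not the paper's implementation): fuse a precomputed NAP map
# and GFCC along the feature axis, then apply a small CRNN that outputs
# frame-level multi-label event probabilities. All sizes are illustrative.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        # CNN front-end: learns local time-frequency patterns
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((1, 4)),          # pool along the feature axis only
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        # RNN back-end: models temporal context across frames
        self.gru = nn.GRU(64 * (n_features // 16), 128,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, n_classes)  # per-frame multi-label output

    def forward(self, x):                      # x: (batch, frames, features)
        x = x.unsqueeze(1)                     # -> (batch, 1, frames, features)
        x = self.cnn(x)                        # -> (batch, 64, frames, features // 16)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.gru(x)
        return torch.sigmoid(self.head(x))     # frame-level event probabilities

# Illustrative fused feature: NAP (e.g. 64 cochlear channels) and GFCC
# (e.g. 32 coefficients) concatenated frame by frame; values are placeholders.
nap   = torch.rand(8, 500, 64)                 # (batch, frames, NAP channels)
gfcc  = torch.rand(8, 500, 32)                 # (batch, frames, GFCC coefficients)
fused = torch.cat([nap, gfcc], dim=-1)         # (8, 500, 96)

model = CRNN(n_features=96, n_classes=10)      # e.g. 10 urban sound event classes
probs = model(fused)                           # (8, 500, 10) per-frame probabilities

In the fully supervised setting described in the abstract, such a network would typically be trained with a frame-level binary cross-entropy loss against strongly labelled (onset/offset) event annotations.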

Digital cochlear model; Neural activity pattern; Fused auditory parameters; Sound event detection; Four-fold cross validation

罗吉 (Luo Ji), 夏秀渝 (Xia Xiuyu)


College of Electronics and Information Engineering, Sichuan University, Chengdu 610064


Joint Fund of the National Natural Science Foundation of China (Grant No. U1733109)

2024

Journal of Sichuan University (Natural Science Edition)
Sichuan University

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.358
ISSN: 0490-6756
Year, Volume (Issue): 2024, 61(4)