四川大学学报(自然科学版), 2024, Vol. 61, Issue 4: 225-231. DOI: 10.19907/j.0490-6756.2024.043006

Multiple sound event detection based on auditory fusion features

罗吉¹, 夏秀渝¹

Author information

  • 1. College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China

Abstract

In order to improve the performance of the multi-sound event detection task, this paper studies the Cascade of Asymmetric Resonators with Fast-Acting Compression (CARFAC) digital cochlear model in depth and proposes a multi-sound event detection method based on auditory fusion features. First, CARFAC is used to extract the Neural Activity Pattern (NAP) of the mixed sound. The NAP is then concatenated with Gammatone Frequency Cepstral Coefficients (GFCC) to form fused auditory features, which are fed into a Convolutional Recurrent Neural Network (CRNN) for fully supervised learning to detect urban sound events. Experimental results demonstrate that, under low signal-to-noise ratios and with many overlapping events, the fused auditory features offer better robustness and multi-sound event detection performance than individual features such as NAP, MFCC, and GFCC.
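
To make the pipeline described above concrete, the sketch below (Python, using NumPy and PyTorch) shows frame-level fusion of NAP and GFCC features by concatenation along the feature axis, followed by a small CRNN that outputs frame-wise multi-label event probabilities. This is only an illustration of the kind of architecture the abstract describes: the placeholder NAP/GFCC arrays, channel counts, layer sizes, and class count are assumptions, not the authors' actual configuration or code.

    # Illustrative sketch of a fused-feature CRNN pipeline for sound event detection.
    # NAP and GFCC extraction are represented by placeholder arrays here; in the paper
    # the NAP comes from the CARFAC cochlear model and the GFCC from a gammatone
    # filterbank. All sizes below are assumptions for illustration only.

    import numpy as np
    import torch
    import torch.nn as nn


    def fuse_features(nap: np.ndarray, gfcc: np.ndarray) -> np.ndarray:
        """Concatenate frame-aligned NAP (T x C_nap) and GFCC (T x C_gfcc) features."""
        assert nap.shape[0] == gfcc.shape[0], "NAP and GFCC must share the frame axis"
        return np.concatenate([nap, gfcc], axis=1)        # (T, C_nap + C_gfcc)


    class CRNN(nn.Module):
        """Small convolutional-recurrent network for frame-level multi-label detection."""

        def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),
                nn.BatchNorm2d(32),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(1, 4)),         # pool only along the feature axis
            )
            self.gru = nn.GRU(32 * (n_features // 4), hidden,
                              batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_classes)  # per-frame class activities

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, frames, features) -> add a channel axis for the CNN
            x = self.conv(x.unsqueeze(1))                  # (B, 32, T, F // 4)
            b, c, t, f = x.shape
            x = x.permute(0, 2, 1, 3).reshape(b, t, c * f) # (B, T, 32 * F // 4)
            x, _ = self.gru(x)
            return torch.sigmoid(self.head(x))             # (B, T, n_classes)


    # Toy usage with random stand-ins for NAP (71 cochlear channels) and GFCC (64 coefficients).
    nap = np.random.rand(500, 71).astype(np.float32)
    gfcc = np.random.rand(500, 64).astype(np.float32)
    fused = fuse_features(nap, gfcc)                       # (500, 135)

    model = CRNN(n_features=fused.shape[1], n_classes=10)
    probs = model(torch.from_numpy(fused).unsqueeze(0))    # (1, 500, 10) frame-wise probabilities

Pooling is applied only along the feature axis so that the frame (time) resolution needed for frame-level event detection is preserved before the recurrent layer.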

Key words

Digital cochlear model / Neural activity pattern / Fused auditory features / Sound event detection / Four-fold cross validation


Funding

Joint Funds of the National Natural Science Foundation of China (U1733109)

Publication year

2024
Journal: 四川大学学报(自然科学版)
Publisher: 四川大学

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.358
ISSN: 0490-6756
Number of references: 4