Multimodal Attentive Fusion Network for audio-visual event recognition
Full text links: NSTL, Elsevier
Event classification is inherently sequential and multimodal. Therefore, deep neural models need to dynamically focus on the most relevant time window and/or modality of a video. In this study, we propose the Multimodal Attentive Fusion Network (MAFnet), an architecture that can dynamically fuse visual and audio information for event recognition. Inspired by prior studies in neuroscience, we couple both modalities at different levels of the visual and audio paths. Furthermore, the network dynamically highlights the modality that is most relevant for classifying the event in a given time window. Experimental results on the AVE (Audio-Visual Event), UCF51, and Kinetics-Sounds datasets show that the approach effectively improves accuracy in audio-visual event classification. Code is available at: https://github.com/numediart/MAFnet
Keywords: Audio-visual fusion; Modality conditioning; Attention; Multimodal deep learning; Event recognition
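The core idea described in the abstract, weighting modalities per time window with attention before classifying the event, can be illustrated with a short sketch. The module below is a minimal, hypothetical PyTorch illustration and not the authors' MAFnet implementation (the official code is at the GitHub link above); the layer names, feature dimensions, and single-level fusion are assumptions made for exposition, whereas MAFnet couples the modalities at multiple levels of the visual and audio paths.

```python
# Minimal sketch of attentive audio-visual fusion (illustrative only, not MAFnet).
# Assumes per-time-window visual and audio features were already extracted by
# pretrained backbones; all dimensions below are placeholder values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveModalityFusion(nn.Module):
    """Fuses audio and visual sequences with per-time-window modality attention."""

    def __init__(self, visual_dim, audio_dim, hidden_dim, num_classes):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Scores one weight per modality at each time window.
        self.modality_score = nn.Linear(hidden_dim, 1)
        # Scores one weight per time window for temporal pooling.
        self.temporal_score = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, visual, audio):
        # visual: (B, T, visual_dim), audio: (B, T, audio_dim)
        v = torch.tanh(self.visual_proj(visual))            # (B, T, H)
        a = torch.tanh(self.audio_proj(audio))              # (B, T, H)
        stacked = torch.stack([v, a], dim=2)                # (B, T, 2, H)

        # Modality attention: which modality matters at each time window.
        mod_w = F.softmax(self.modality_score(stacked), dim=2)   # (B, T, 2, 1)
        fused = (mod_w * stacked).sum(dim=2)                 # (B, T, H)

        # Temporal attention: which time windows matter for the event.
        time_w = F.softmax(self.temporal_score(fused), dim=1)    # (B, T, 1)
        clip_embedding = (time_w * fused).sum(dim=1)         # (B, H)

        return self.classifier(clip_embedding)               # (B, num_classes)


if __name__ == "__main__":
    model = AttentiveModalityFusion(visual_dim=512, audio_dim=128,
                                    hidden_dim=256, num_classes=28)
    visual = torch.randn(4, 10, 512)   # e.g. 10 one-second windows per clip
    audio = torch.randn(4, 10, 128)
    logits = model(visual, audio)
    print(logits.shape)                # torch.Size([4, 28])
```

In this sketch, one softmax over the modality axis highlights audio or video per time window, and a second softmax over the temporal axis pools the fused sequence into a clip-level embedding; the paper's architecture refines this idea with modality conditioning at several depths of the two paths.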