
Study on Enhancing the Robustness of RGB-skeleton Action Recognition Based on the Feature Interaction Module
Malicious attackers can easily deceive neural networks by adding human-imperceptible adversarial noise to natural samples, leading to misclassification. To enhance model robustness against such adversarial perturbations, previous research has predominantly concentrated on single-modal tasks, leaving multimodal scenarios insufficiently explored. This paper therefore aims to improve the robustness of multimodal RGB-skeleton action recognition and introduces a robust action recognition framework based on a Feature Interaction Module (FIM), which extracts global information from adversarial samples to learn inter-modal joint representations for calibrating multimodal features. A corresponding loss function tailored to this framework is also developed. Experimental results demonstrate that under the CW attack, our method achieves an RI of 25.14% and an average robust accuracy of 48.99% on the NTU RGB+D dataset, outperforming the latest SimMin+ExFMem method by 8.55 and 23.79 percentage points, respectively. These findings confirm that our approach surpasses existing methods in both enhancing robustness and balancing accuracy.
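The abstract describes the FIM only at a high level: pool global information from each modality, form an inter-modal joint representation, and use it to calibrate the per-modality features. As one possible reading of that description, a minimal pure-Python sketch using squeeze-and-excitation-style gating is shown below; the function names (`feature_interaction_module`, `global_avg_pool`) and the weight matrix `W` are illustrative assumptions, not the paper's actual design.

```python
import math

def global_avg_pool(features):
    """Average a [T][D] per-frame feature list into one D-dim global vector."""
    T = len(features)
    D = len(features[0])
    return [sum(frame[d] for frame in features) / T for d in range(D)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feature_interaction_module(rgb_feats, skel_feats, W):
    """Illustrative FIM sketch (an assumption, not the paper's architecture):
    pool each modality into a global descriptor, concatenate the descriptors
    into a joint representation, then derive per-channel gates (linear layer
    followed by a sigmoid) that recalibrate both modalities' features."""
    g_rgb = global_avg_pool(rgb_feats)     # global RGB information
    g_skel = global_avg_pool(skel_feats)   # global skeleton information
    joint = g_rgb + g_skel                 # inter-modal joint representation
    # W has 2*D rows; each row yields one calibration gate in (0, 1).
    gates = [sigmoid(sum(w * j for w, j in zip(row, joint))) for row in W]
    D = len(g_rgb)
    rgb_gates, skel_gates = gates[:D], gates[D:]
    calib_rgb = [[v * rgb_gates[d] for d, v in enumerate(f)] for f in rgb_feats]
    calib_skel = [[v * skel_gates[d] for d, v in enumerate(f)] for f in skel_feats]
    return calib_rgb, calib_skel
```

Because both modalities' gates are computed from the shared joint representation, a perturbation confined to one modality can be down-weighted using clean evidence from the other, which is the intuition behind feature calibration in this setting.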

computer vision; multimodal; RGB-skeleton action recognition; adversarial training

HOU Yonghong, LIU Chao, LIU Xin, YUE Huanjing, YANG Jingyu


School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China


2024

Journal of Hunan University (Natural Sciences)
Hunan University

Indexed in: CSTPCD; Peking University Core Journal List (北大核心)
Impact factor: 0.651
ISSN:1674-2974
Year, volume (issue): 2024, 51(12)