
Universal Adversarial Attack for Face Recognition Based on Commonality Gradient

The malicious use of face recognition technology may lead to personal information leakage, posing a significant threat to personal privacy. Protecting facial privacy through universal adversarial attacks is therefore of considerable research significance. However, most existing universal adversarial attack algorithms focus on image classification tasks; when applied to face recognition models, they often suffer from low attack success rates and clearly visible perturbations. To address these challenges, this study proposes a universal adversarial attack method for face recognition based on commonality gradients. The method optimizes a universal adversarial perturbation using the commonality gradient derived from the adversarial perturbations of multiple face images, employs a dominant feature loss to strengthen the attack capability of the perturbation, and combines these with a multi-stage training strategy to balance attack effectiveness and visual quality. Experiments on public datasets show that the method outperforms approaches such as Cos-UAP and SGA in attack performance on face recognition models, and the generated adversarial examples have better visual quality, demonstrating the effectiveness of the proposed method.
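The abstract only outlines the optimization idea. As a rough illustration, the sketch below shows one way a single universal perturbation can be optimized with a gradient shared across many face images; it is a minimal sketch under assumptions, not the authors' implementation. The face encoder `encoder`, the data loader `loader`, the 112×112 input size, the L∞ budget `eps`, and the cosine-similarity objective are all placeholders, the batch-mean gradient merely stands in for the paper's commonality gradient (whose exact aggregation rule is not given here), and the dominant feature loss and multi-stage training schedule are not reproduced.

```python
import torch

# Minimal sketch (not the authors' code): optimize one universal perturbation
# `delta` so that perturbed faces no longer match their own clean embeddings.
def train_uap(encoder, loader, eps=10 / 255, steps=500, lr=0.01, device="cpu"):
    # Assumed input size for a typical face encoder (e.g., 112x112 aligned crops).
    delta = torch.zeros(1, 3, 112, 112, device=device, requires_grad=True)
    for _ in range(steps):
        for faces in loader:                      # batches of aligned faces in [0, 1]
            faces = faces.to(device)
            with torch.no_grad():
                clean_feat = encoder(faces)       # reference identity embeddings
            adv_feat = encoder(torch.clamp(faces + delta, 0, 1))
            # Mean cosine similarity over the batch: its gradient is the average
            # of per-image gradients, a simple proxy for a "commonality" gradient.
            loss = torch.nn.functional.cosine_similarity(adv_feat, clean_feat).mean()
            grad = torch.autograd.grad(loss, delta)[0]
            with torch.no_grad():
                delta -= lr * grad.sign()         # descend: push embeddings apart
                delta.clamp_(-eps, eps)           # keep the perturbation bounded
    return delta.detach()
```

In use, one would add the returned `delta` to unseen face images and check whether `encoder(face + delta)` still matches the enrolled identity above the verification threshold; a lower match rate indicates a stronger universal attack.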

Keywords: face recognition; adversarial example; universal adversarial attack; commonality gradient; personal privacy security

Authors: Duan Wei (段伟), Gao Chenqiang (高陈强), Li Pengcheng (李鹏程), Zhu Changjie (朱常杰)


School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

Chongqing Key Laboratory of Signal and Information Processing, Chongqing 400065, China


Funding: National Natural Science Foundation of China (62176035, 62201111); Science and Technology Research Program of Chongqing Municipal Education Commission (KJZD-K202100606)

Journal: 计算机系统应用 (Computer Systems & Applications)

Publisher: Institute of Software, Chinese Academy of Sciences

Indexed in: CSTPCD

Impact factor: 0.449

ISSN: 1003-3254

Year, volume (issue): 2024, 33(8)