
Targeted Identity-guided Adaptive Adversarial Attack on Face Recognition Models

Deep neural networks are vulnerable to adversarial examples, which cause an AI system to misclassify input data (e.g., images) through small modifications to it. Many existing attack methods introduce noise and artifacts into the image and generalize poorly. This paper proposes an adversarial attack method for face recognition models: an identity disentanglement technique adaptively extracts the facial identity features that matter most to the recognition model's decision while preserving the original visual appearance, and these features guide the optimization of the adversarial attack in the StyleGAN latent space and in pixel space. Attack experiments on representative face recognition models and on commercial face recognition systems show that the adversarial face images generated by this method improve the attack success rate by an average of 11% and the visual quality by about 3% over the state of the art.
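The abstract describes optimizing an adversarial example in a StyleGAN latent space under the guidance of identity features, with a constraint that keeps the result visually close to the original face. The paper itself provides no code; the following is a minimal, hypothetical PyTorch sketch of that general idea, assuming a pre-trained StyleGAN generator G (latent code to image) and a face recognition embedder fr_model (image to identity embedding). All names (latent_space_attack, lam, etc.) are illustrative and not the authors' implementation, which additionally refines in pixel space and uses disentangled identity features rather than raw embeddings.

```python
import torch
import torch.nn.functional as F

def latent_space_attack(G, fr_model, w_init, id_target,
                        steps=100, lr=0.01, lam=0.1):
    """Identity-guided adversarial optimization in a StyleGAN latent space.

    G:        pre-trained generator, maps latent code w -> face image.
    fr_model: face recognition model, maps image -> identity embedding.
    w_init:   latent code of the original (benign) face.
    id_target: L2-normalized embedding of the target identity
               (for a targeted/impersonation attack).
    """
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        img = G(w)                                 # synthesize face from latent code
        emb = F.normalize(fr_model(img), dim=-1)   # identity embedding of current face

        # Targeted attack: pull the embedding toward the target identity.
        id_loss = 1.0 - F.cosine_similarity(emb, id_target, dim=-1).mean()

        # Latent-space regularizer: stay near the original code so the
        # generated face keeps its original visual appearance.
        reg = (w - w_init).pow(2).mean()

        loss = id_loss + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()

    return G(w).detach()
```

Because the perturbation lives in the generator's latent space rather than in raw pixels, the modified image stays on the manifold of natural faces, which is one way to avoid the noise and artifacts the abstract attributes to earlier pixel-space methods.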

adversarial attack; face recognition; identity feature; identity disentanglement

Li Ao, Zhao Yao, Ni Rongrong, Jia Xiaohong


School of Computer Science and Technology, Beijing Jiaotong University, Beijing 100044, China

School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China


2024

Journal of Lanzhou Jiaotong University
Lanzhou Jiaotong University

Impact factor: 0.532
ISSN: 1001-4373
Year, volume (issue): 2024, 43(5)