Face image de-identification with class universal perturbations based on triplet constraints
Objective With the development of face recognition technology, face images have been widely used for identity verification. As important biometric features, face images usually carry personal identity information. When illegally obtained and used by attackers, these images may cause serious losses and harm to individuals, so protecting face privacy and security has always been an urgent problem. This paper studies the de-identification of face images and the convenient and efficient use of class universal perturbations for face privacy protection. The class universal perturbation method generates exclusive perturbation information for each user, and this exclusive perturbation is then superimposed on the user's face images for de-identification, thereby resisting malicious analysis of user information by deep face recognizers. Given the limited number of face images provided by users, de-identification with class universal perturbations often faces the problem of insufficient samples. In addition, extracting face image features can be difficult because of variations in shooting angle, which increases the difficulty of learning user features through class universal perturbations. At the same time, class universal perturbations face a complex protection scenario: they are generated from a local proxy model yet must resist different face recognition models, which are trained with different datasets, loss functions, and network architectures, thus increasing the difficulty of generating class universal perturbations with transferability. In view of the insufficient user training data and the need to further improve the protection effect of class universal perturbations, this paper proposes a method for generating class universal perturbations constrained by the triplet loss function, called face image de-identification with class universal perturbations based on triplet constraints (TC-CUAP). Negative samples are constructed on the basis of the feature subspace to augment the training data and obtain samples in triplets.
Method The ResNet50 deep neural network is adopted to extract the features of the user's face images, which serve as positive samples for training. The feature subspace is then constructed using three affine combination methods (i.e., affine hull, convex hull, and class center) of the positive samples. The maximum distance between the samples and the feature subspace is solved by convex optimization, the training samples are optimized along the direction away from the feature subspace, and the optimized samples are labeled as negative samples. A perturbation is randomly generated as the initial value of the class universal perturbation and added to the original images; features are then extracted from the perturbed images to obtain the training samples. The positive, negative, and training samples constitute the triplets required for training. The cosine distance is used when training the perturbation: the distance between the training samples and the positive samples is maximized, while that between the training samples and the negative samples is minimized. When a training sample is equidistant from the positive and negative samples, it is pushed toward the negative sample, thus allowing the perturbation to learn more adversarial information within a limited range. A scaling transformation is then applied to the generated perturbation: those parts of the perturbation whose values are greater than 0 are set to the upper limit of the perturbation threshold, while those whose values are less than 0 are set to the lower limit. The class universal perturbation is ultimately obtained.
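The following is a minimal PyTorch sketch of the triplet-constrained training step described above, assuming a frozen local proxy feature extractor. The names train_cuap, extractor, user_images, pos_feats, and neg_feats, as well as the Adam optimizer, step count, and L_inf budget, are illustrative assumptions rather than the authors' exact implementation; the negative features are taken as given here, whereas the paper derives them from the feature subspace by convex optimization.

```python
# A minimal sketch of triplet-constrained class universal perturbation training.
# All names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn.functional as F

def train_cuap(extractor, user_images, pos_feats, neg_feats,
               eps=8 / 255, steps=100, lr=1e-2):
    """Learn one class universal perturbation for a single user.

    extractor   : frozen face feature network (e.g., a ResNet50 embedder)
    user_images : (N, 3, H, W) face images of this user, in [0, 1]
    pos_feats   : (N, d) features of the clean user images (positive samples)
    neg_feats   : (M, d) features pushed away from the user's feature subspace
                  (negative samples)
    eps         : perturbation threshold (L_inf budget)
    """
    extractor.eval()  # local proxy model is frozen; only the perturbation is optimized

    # Random initialization of the class universal perturbation in [-eps, eps].
    delta = (torch.rand_like(user_images[:1]) * 2 - 1) * eps
    delta.requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)

    pos_feats = F.normalize(pos_feats, dim=1)
    neg_feats = F.normalize(neg_feats, dim=1)

    for _ in range(steps):
        adv = torch.clamp(user_images + delta, 0.0, 1.0)
        feats = F.normalize(extractor(adv), dim=1)   # features of the training samples

        # Cosine similarities to positive and negative samples.
        sim_pos = feats @ pos_feats.t()   # push away from the user's own features
        sim_neg = feats @ neg_feats.t()   # pull toward the negative samples

        # Triplet-style objective: minimizing similarity to positives while
        # maximizing similarity to negatives is equivalent to maximizing
        # cosine distance to positives and minimizing it to negatives.
        loss = sim_pos.mean() - sim_neg.mean()

        opt.zero_grad()
        loss.backward()
        opt.step()

        # Keep the perturbation within the threshold.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return delta.detach()
```

The single perturbation tensor is shared across all of the user's images, which is what makes it a class universal (per-user) perturbation rather than a per-image adversarial example.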
Result The data required for the experiments are taken from the MegaFace challenge, MS-Celeb-1M, and LFW datasets. The Privacy-Commons dataset, which represents ordinary users, and the Privacy-Celebrities dataset, which represents celebrity users, are constructed, and test sets corresponding to these two datasets are built from the MegaFace challenge, MS-Celeb-1M, and LFW data. Black box tests are conducted on the Privacy-Commons and Privacy-Celebrities datasets against face recognition models with different loss functions and network architectures. Three of the black box models use different loss functions, namely, CosFace, ArcFace, and SFace, while the other three use different network architectures, namely, SENet, MobileNet, and IResNet variants. The proposed TC-CUAP is compared with the generalizable data-free objective for crafting universal perturbations (GD-UAP), generative adversarial perturbations (GAP), universal adversarial perturbations (UAP), and one person one mask (OPOM). On the Privacy-Commons dataset, the highest Top-1 protection success rates of each method against the different face recognition models are 8.7% (GD-UAP), 59.7% (GAP), 64.2% (UAP), 86.5% (OPOM), and 90.6% (TC-CUAP), while the highest Top-5 protection success rates are 3.5% (GD-UAP), 46.7% (GAP), 51.7% (UAP), 80.1% (OPOM), and 85.8% (TC-CUAP). Compared with the well-known OPOM method, TC-CUAP improves the protection success rate by an average of 5.74%. On the Privacy-Celebrities dataset, the highest Top-1 protection success rates of each method against the different face recognition models are 10.7% (GD-UAP), 53.3% (GAP), 59% (UAP), 69.6% (OPOM), and 75.9% (TC-CUAP), while the highest Top-5 protection success rates are 4.2% (GD-UAP), 42.7% (GAP), 47.8% (UAP), 60.6% (OPOM), and 67.9% (TC-CUAP). Compared with OPOM, TC-CUAP improves the protection success rate by an average of 5.81%. The time spent generating perturbations for 500 users is used as an indicator of the efficiency of each method; the time consumption is 19.44 min (OPOM), 10.41 min (UAP), 6.52 min (TC-CUAP), 4.51 min (GAP), and 1.12 min (GD-UAP). These experimental results verify the superiority of the TC-CUAP method in face de-identification and its transferability across different models. TC-CUAP with the perturbation scaling transformation achieves average Top-1 protection success rates of 80% and 64.6% on the Privacy-Commons and Privacy-Celebrities datasets, respectively, while TC-CUAP without the perturbation scaling transformation achieves average Top-1 protection success rates of 78.1% and 62.5%. The perturbation scaling transformation thus increases the protection success rate by about 2%, proving its effectiveness. In addition to using the convex hull to model the user feature subspace and generate negative samples, these samples can also be constructed using feature iterative universal adversarial perturbations (FI-UAP), FI-UAP incorporating intra-class interactions (FI-UAP+), and Gaussian random perturbations. On the Privacy-Commons and Privacy-Celebrities datasets, these methods obtain the highest Top-1 protection success rates of 85.6% (FI-UAP), 86% (FI-UAP+), 44.8% (Gauss), and 90.6% (convex hull). Using the convex hull yields a 4.9% higher average protection success rate than the suboptimal FI-UAP+ method, thereby verifying the rationality of the negative sample construction described in this paper.
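As a concrete reading of the perturbation scaling transformation whose effect is reported above, the sketch below maps each positive entry of the learned perturbation to the upper threshold and each negative entry to the lower threshold, i.e., an eps * sign(delta) mapping within the same L_inf budget. The function name and default threshold value are assumptions, not the authors' exact code.

```python
# A minimal sketch of the perturbation scaling transformation; names are illustrative.
import torch

def scale_perturbation(delta: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """Snap each entry of the perturbation to the threshold boundary:
    positive entries go to +eps (upper limit), negative entries to -eps
    (lower limit); zero entries stay zero."""
    return eps * torch.sign(delta)
```

Because every entry is pushed to the boundary of the perturbation budget, the scaled perturbation is strictly stronger in magnitude without exceeding the threshold, which is consistent with the roughly 2% Top-1 gain reported above.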
Conclusion The proposed method uses positive, negative, and training samples as triplet constraints to obtain a class universal perturbation for face image de-identification. The negative samples are constructed from the original training data, thus alleviating the problem of insufficient training samples, and the class universal perturbation trained under these triplet constraints provides the feature attack information. At the same time, the introduction of perturbation scaling increases the strength of the class universal perturbation and improves the face image de-identification effect. The superiority of the proposed method is further verified by comparing its face de-identification performance with that of GD-UAP, GAP, UAP, and OPOM.
class universal perturbation; triplet constraint; face image de-identification; data augmentation; face privacy protection