Cross-Model Deepfake Defense Method Based on a Generative Adversarial Network
To reduce the social risks caused by the abuse of deepfake technology, an active defense method against deep forgery based on a Generative Adversarial Network (GAN) is proposed. Adversarial samples are created by adding imperceptible perturbations to original images, which significantly distort the outputs of multiple forgery models. The proposed model comprises an adversarial-sample generation module and an adversarial-sample optimization module. The generation module consists of a generator and a discriminator: the generator takes an original image and produces a perturbation, whose spatial distribution is then constrained through adversarial training. Reducing the visual perceptibility of the perturbation improves the authenticity of the adversarial sample. The optimization module comprises a basic adversarial watermark, deep forgery models, and discriminators; it simulates black-box scenarios by attacking multiple deep forgery models, thereby improving the attack capability and transferability of the adversarial samples. Training and testing are conducted on the commonly used deepfake datasets CelebFaces Attributes (CelebA) and Labeled Faces in the Wild (LFW). Experimental results show that, compared with existing active defense methods, the proposed cross-model method achieves a defense success rate exceeding 85% and generates adversarial samples 20-30 times more efficiently than conventional algorithms.
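The core idea of the generation module, producing a perturbation and adding it to the original image under an imperceptibility constraint, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random noise stands in for the generator's output, and the L-infinity budget `epsilon` is an assumed constraint (the paper constrains the perturbation's spatial distribution via adversarial training).

```python
import numpy as np

def clip_perturbation(delta, epsilon):
    """Constrain the perturbation to an L-infinity budget so it stays imperceptible."""
    return np.clip(delta, -epsilon, epsilon)

def make_adversarial_sample(image, delta, epsilon=8 / 255):
    """Add a budget-constrained perturbation to the original image.

    The result is kept in the valid pixel range [0, 1]; it is this adversarial
    sample, not the clean image, that would be fed to the forgery models.
    """
    delta = clip_perturbation(delta, epsilon)
    return np.clip(image + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((3, 64, 64))                       # stand-in for an original face image
raw_delta = rng.normal(scale=0.1, size=image.shape)   # stand-in for the generator's raw output
adv = make_adversarial_sample(image, raw_delta)
```

In the full method, `raw_delta` would come from the trained generator, and the discriminator plus the attacked forgery models would supply the training losses that shape it.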