With the development of generative adversarial network (GAN) technology in recent years, facial manipulation technology has advanced significantly in both academia and industry. In particular, deep face forgery models, represented by Deepfake, have been widely used on the Internet. The term "Deepfake" is a portmanteau of "deep learning" and "fake". It refers to face modification technology based on deep learning that can modify faces in videos and images, including face swapping, facial expression editing, and facial attribute editing. Deepfake can be roughly divided into two categories: identity-agnostic and identity-related manipulations. Face swapping is classified as identity-related manipulation; it aims to replace the target face region with the source face. Meanwhile, facial expression and facial attribute editing are classified as identity-agnostic manipulations; they modify the attributes of a face, such as its expression, hair color, age, and gender, without changing its identity.

On the one hand, Deepfake technology has been widely used in film special effects, advertising, and entertainment apps. For example, some films have achieved more realistic and lower-cost special effects by using such technology. For customers, the model on screen can be personalized in accordance with their body dimensions, skin color, and hair type before they purchase products. Meanwhile, Deepfake has inspired an increasing number of entertainment applications, such as ZAO, Meitu Xiuxiu, and FaceApp, which have considerably lowered the threshold for using this technology. Through these applications, users can easily replace the faces of actors in movies or television dramas with their own faces, or change their hair color or makeup at will. On the other hand, Deepfake forgery is currently being applied in scenarios that may cause adverse effects. For example, one of the most notorious Deepfake applications, DeepNude, replaces the face of a pornographic actor with that of a celebrity, causing serious damage to individual privacy and even personal reputation. In addition, a Deepfake with target attributes may pass the identity verification of commercial applications, threatening application security and harming the property of the person being impersonated. Fake news in which a politician delivers a speech that he or she never gave also poses a serious threat to social stability and national security.

On this basis, several defense methods against Deepfake forgery have been proposed. Existing defense technologies can be roughly divided into two categories: passive defense and proactive defense. Passive defense is primarily based on detection. Despite their considerable accuracy, detectors are merely passive measures against Deepfake attacks because they cannot eliminate the negative effects of fake content that has already been generated and widely disseminated; they cannot achieve prior defense or intervene in the generation of Deepfake faces. Therefore, the current mainstream view is that proactive defense techniques are more defensive and practical. In contrast with passive defense, proactive defense disrupts Deepfake proactively by adding special adversarial perturbations or watermarks to source images or videos before they are shared online. When a malicious user attempts to use them for Deepfake forgery, the output of the forgery model is seriously degraded in visual quality and the forgery fails. Moreover, even if indistinguishable fake images are obtained, the malicious user can be traced through the forged images.
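As a hedged illustration of this perturbation-based protection, the following minimal PyTorch sketch disrupts a hypothetical differentiable forgery model, `forgery_model`, by maximizing the distortion of its output under a small perturbation budget. The loss, step size, and budget are illustrative assumptions rather than the method of any particular work surveyed here.

```python
import torch

def protect_image(image, forgery_model, epsilon=0.05, step=0.01, iters=50):
    """PGD-style proactive disruption sketch (illustrative, not a specific published method).

    image         : source face tensor in [0, 1], shape (1, 3, H, W)
    forgery_model : assumed differentiable Deepfake generator, mapping a face to a fake face
    epsilon       : L_inf perturbation budget, keeps the protected image visually close to the original
    """
    with torch.no_grad():
        clean_fake = forgery_model(image)            # forgery result on the unprotected image

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        fake = forgery_model((image + delta).clamp(0, 1))
        # Maximize the distance between the forgery output and its "clean" counterpart,
        # i.e., push the generator toward a visibly broken result.
        loss = torch.nn.functional.mse_loss(fake, clean_fake)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()        # gradient ascent on the disruption loss
            delta.clamp_(-epsilon, epsilon)          # stay within the imperceptibility budget
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()      # protected image that can be shared online
```

In practice, existing methods differ mainly in the choice of disruption loss, the perturbation budget, and whether the forgery model is accessible in a white-box or black-box manner.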
The present study reviews currently available Deepfake proactive defense techniques. Our overview focuses on the following perspectives: 1) a brief introduction to Deepfake forgery technologies and their effects; 2) a systematic summary of current proactive defense algorithms for Deepfake forgery, including technical principles, classification, performance, datasets, and evaluation methods; and 3) a description of the challenges faced by Deepfake proactive defense and a discussion of its future directions.

From the perspective of the defense target, Deepfake proactive defense can be divided into proactive disruption and proactive forensics. From the point of view of technical implementation, proactive disruption can be further subdivided into data poisoning, adversarial attack, and latent space defense methods. The data poisoning defense method destroys Deepfake forgery during the training stage; it requires the faker to use poisoned images as training data for the forgery model. Meanwhile, the adversarial attack defense method destroys the forgery in the test stage: when the faker uses a well-trained Deepfake forgery model to manipulate face images carrying adversarial perturbations, the output image is destroyed. This idea of defense based on adversarial attack is the most widely used in existing studies. In latent space defense methods, perturbations are not added directly to an image. Instead, the image is first mapped into a latent space, and this mapping is implemented with an elaborate transformation such that the image is protected from the threat of Deepfake forgery. Notably, this method relies heavily on the performance of GAN inversion.

We then provide a brief introduction to the evaluation methods and datasets used in proactive defense. A defense technology is typically evaluated from two aspects: the effect of disrupting the output of the Deepfake forgery model and the effect of maintaining the visual quality of the perturbed images. These effects are generally measured in terms of pixel distance, feature distance, and attack success rate. Commonly used image quality indicators, such as the structural similarity index measure (SSIM), Fréchet inception distance (FID), and normalized mean error (NME), are also considered during evaluation.

Finally, we expound the challenges faced by Deepfake proactive defense, including the circumvention of proactive defense, the improvement of performance in black-box scenarios, and practicality issues. In addition, we look forward to the future directions of proactive defense; more robust performance and better visual quality are identified as two major concerns. In conclusion, our survey summarizes the principal concepts and classification of Deepfake proactive defense and provides detailed explanations of various methods, evaluation metrics, commonly used datasets, major challenges, and prospects. We hope that it will serve as an introduction and guide for Deepfake proactive defense research.
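As a small, self-contained example of the evaluation protocol described above, the following Python sketch computes pixel distance, SSIM, and a thresholded attack success rate over pairs of forgery outputs obtained with and without protection. The success threshold and the use of scikit-image are illustrative assumptions; FID and NME are omitted because they require a pre-trained feature network and facial landmarks, respectively.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def disruption_metrics(clean_fakes, disrupted_fakes, threshold=0.05):
    """Evaluate proactive disruption on paired forgery outputs (illustrative thresholds).

    clean_fakes, disrupted_fakes : lists of HxWx3 float arrays in [0, 1], i.e.
        forgery results without and with the protective perturbation.
    threshold : assumed pixel-distance threshold above which a forgery counts as "broken".
    """
    l2_dists, ssim_vals, successes = [], [], 0
    for clean, disrupted in zip(clean_fakes, disrupted_fakes):
        # Pixel distance between the undisturbed and the disrupted forgery output
        l2 = float(np.sqrt(np.mean((clean - disrupted) ** 2)))
        l2_dists.append(l2)
        # Structural similarity: lower values mean the disruption is more visible
        ssim_vals.append(ssim(clean, disrupted, channel_axis=2, data_range=1.0))
        # A sample is counted as successfully disrupted if the distortion exceeds the threshold
        successes += int(l2 > threshold)
    return {
        "mean_L2": float(np.mean(l2_dists)),
        "mean_SSIM": float(np.mean(ssim_vals)),
        "attack_success_rate": successes / len(l2_dists),
    }
```

The same pairing scheme extends naturally to feature distance (by comparing deep features of the two outputs) and to quality metrics computed between the original and the perturbed source images.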