Defense Against Adversarial Attacks Using Perlin Noise of Different Spatial Frequencies
Adversarial attacks interfere with the operation of a deep neural network model by adding carefully designed, hard-to-detect attack data to the model's inputs, causing it to produce erroneous outputs. Because adversarial attacks can cause serious problems, it is important to defend against them effectively. In this paper, Perlin noise with different spatial frequency characteristics was added to the model inputs and training samples so that the attack data was masked by spatially structured noise, and the defense effect was investigated against three representative adversarial attack methods: the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Sparse L1 Descent (SLD). The results showed that: (1) Perlin noise improved the accuracy and robustness of the model; (2) the defense effect differed among Perlin noises of different spatial frequencies; and (3) against the SLD attack, Perlin noise defended better than spatially unstructured noise. These results indicate that Perlin noise improves the accuracy and robustness of the model and provides a good defense against SLD attacks.
Keywords: adversarial attack; defense of adversarial attacks; noise fusion; Perlin noise; spatial frequency
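As a rough illustration of the noise-fusion defense summarized above, the sketch below generates 2D Perlin noise at a chosen spatial frequency and blends it into an input image. The abstract does not specify the generation or fusion procedure, so the function names (`perlin_2d`, `fuse`) and parameters (`freq`, `amplitude`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perlin_2d(shape, freq, seed=0):
    """Zero-mean 2D Perlin noise over `shape`, with `freq` lattice
    cells per axis (larger `freq` -> higher spatial frequency)."""
    rng = np.random.default_rng(seed)
    # Random unit gradient vectors at the lattice corners.
    angles = rng.uniform(0.0, 2.0 * np.pi, size=(freq + 1, freq + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    # Pixel coordinates expressed in lattice-cell units.
    ys = np.linspace(0, freq, shape[0], endpoint=False)
    xs = np.linspace(0, freq, shape[1], endpoint=False)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    y0, x0 = yy.astype(int), xx.astype(int)
    fy, fx = yy - y0, xx - x0  # fractional position inside each cell

    def corner(iy, ix, dy, dx):
        # Dot product of the corner gradient with the offset vector.
        g = grads[iy, ix]
        return g[..., 0] * dx + g[..., 1] * dy

    n00 = corner(y0,     x0,     fy,       fx)
    n01 = corner(y0,     x0 + 1, fy,       fx - 1.0)
    n10 = corner(y0 + 1, x0,     fy - 1.0, fx)
    n11 = corner(y0 + 1, x0 + 1, fy - 1.0, fx - 1.0)

    # Perlin's quintic fade curve gives smooth interpolation.
    fade = lambda t: t * t * t * (t * (6.0 * t - 15.0) + 10.0)
    u, v = fade(fx), fade(fy)
    top = n00 * (1.0 - u) + n01 * u
    bot = n10 * (1.0 - u) + n11 * u
    return top * (1.0 - v) + bot * v

def fuse(x, freq, amplitude=0.1, seed=0):
    """Blend Perlin noise into an image batch x in [0, 1] and clip.
    `amplitude` is an assumed knob for noise strength, not a value
    taken from the paper."""
    noise = perlin_2d(x.shape[-2:], freq, seed)
    return np.clip(x + amplitude * noise, 0.0, 1.0)

# Example: the same input masked with low- vs. high-frequency noise.
x = np.random.rand(1, 32, 32)   # placeholder input batch in [0, 1]
x_low = fuse(x, freq=4)         # coarse, low spatial frequency
x_high = fuse(x, freq=16)       # fine, high spatial frequency
```

Applying the same `fuse` step to the training samples would correspond to the noise-fusion training described in the abstract, with the model then evaluated on noised inputs under FGSM, PGD, and SLD attacks.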