Improving the Transferability of Adversarial Samples Through Laplacian Gradient Smoothing
Deep neural networks are vulnerable to adversarial sample attacks due to the fragility of their model structures. Existing adversarial sample generation methods achieve high white-box attack success rates, but their transferability is limited when attacking other DNN models. To improve the success rate of black-box transfer attacks, this paper proposes a transfer attack method based on Laplacian-smoothed gradients, built on existing gradient-based black-box transfer attacks. First, Laplacian smoothing is applied to the gradient of the input image; the smoothed gradient is then fed into the gradient-based attack method for further computation, with the aim of improving the transferability of adversarial samples across different models. The advantage of Laplacian smoothing is that it effectively reduces the impact of noise and outliers, thereby improving the reliability and stability of the gradient signal. Evaluations on multiple models show that the approach further improves the transfer success rate of adversarial samples, with the best transfer success rate up to 2% higher than that of the baseline attack methods. The results indicate that this method meaningfully enhances the transfer performance of adversarial attack algorithms and provides a new direction for further research and application.
Keywords: Deep neural networks, Adversarial attack, Adversarial samples, Black-box attack, Transferability
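The abstract only describes the method at a high level. The following is a minimal, hypothetical PyTorch sketch of the idea, assuming Laplacian smoothing is realized as a depthwise convolution that blends each pixel's gradient with its 4-neighborhood average, and that I-FGSM serves as the gradient-based base attack. All function names and hyperparameters (sigma, eps, alpha, steps) are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def laplacian_smooth(grad, sigma=0.5):
    """Smooth a gradient tensor of shape (N, C, H, W) by blending each
    pixel's gradient with its 4-neighborhood average, a discrete form of
    Laplacian smoothing. sigma is an illustrative mixing weight."""
    # 4-neighbor averaging kernel, applied per channel (depthwise conv).
    kernel = torch.tensor([[0., 1., 0.],
                           [1., 0., 1.],
                           [0., 1., 0.]], device=grad.device) / 4.0
    c = grad.shape[1]
    kernel = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    neighbor_avg = F.conv2d(grad, kernel, padding=1, groups=c)
    # Blend the raw gradient with its neighborhood average.
    return (1 - sigma) * grad + sigma * neighbor_avg

def ifgsm_with_smoothing(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """I-FGSM where each step's input gradient is Laplacian-smoothed
    before the sign step, per the paper's high-level description."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        grad = laplacian_smooth(grad)  # smooth before taking the sign
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)
    return x_adv
```

In this sketch the smoothing acts as a low-pass filter on the gradient, which is one plausible reading of the abstract's claim: suppressing high-frequency, model-specific components of the gradient should reduce overfitting to the source model and thereby improve transferability.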