Combining Innovative CVTNet and Regularization Loss for Robust Adversarial Defense
Deep neural networks (DNNs) are vulnerable to elaborately crafted and imperceptible adversarial perturbations. With the continuous development of adversarial attack methods, existing defense algorithms can no longer reliably defend against them. Meanwhile, numerous studies have shown that the vision transformer (ViT) has stronger robustness and generalization performance than the convolutional neural network (CNN) in various domains. Moreover, because the standard denoiser is subject to the error amplification effect, the prediction network cannot correctly classify all reconstructed examples. First, this paper proposes a defense network (CVTNet) that combines CNNs and ViTs and is prepended to the prediction network. CVTNet can effectively eliminate adversarial perturbations and maintain high robustness. Furthermore, this paper proposes a regularization loss (LCPL), which optimizes CVTNet by computing different losses for the correct prediction set (CPS) and the wrong prediction set (WPS) of the reconstructed examples, respectively. Evaluation results on several standard benchmark datasets show that CVTNet achieves stronger robustness than other advanced methods. Compared with state-of-the-art algorithms, the proposed CVTNet defense improves the average accuracy on pixel-constrained attack examples generated on the CIFAR-10 dataset by 24.25% and on spatially-constrained attack examples by 14.06%. Moreover, CVTNet shows excellent generalizability in cross-model protection.
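The CPS/WPS split described above can be sketched as follows. This is a minimal illustration, not the paper's actual LCPL: it assumes a pixel-level reconstruction term and assumes the WPS examples simply receive a larger weight (`beta`); the paper's exact loss terms and weighting are not specified in this abstract.

```python
import numpy as np

def cps_wps_loss(recon, clean, preds, labels, alpha=1.0, beta=2.0):
    """Hedged sketch of a regularization loss split over CPS and WPS.

    Reconstructed examples whose prediction matches the label form the
    correct prediction set (CPS); the rest form the wrong prediction set
    (WPS). Here both sets use a per-example MSE against the clean image,
    with WPS examples up-weighted by `beta` (an assumption, since the
    misclassified reconstructions are the ones the denoiser must fix).
    """
    correct = preds == labels  # boolean CPS mask
    # mean squared error per example, averaged over all non-batch axes
    per_example = np.mean((recon - clean) ** 2,
                          axis=tuple(range(1, recon.ndim)))
    loss = alpha * per_example[correct].sum() + beta * per_example[~correct].sum()
    return loss / len(labels)
```

In a real training loop, `preds` would come from the frozen prediction network applied to the reconstructed batch, so the loss focuses optimization on examples the classifier still gets wrong, countering the error amplification effect mentioned above.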
deep learning; adversarial defense; vision transformer; image reconstruction
Wang Weidong, Li Zhi, Zhang Li
Laboratory of Public Big Data,School of Computer Science and Technology,Guizhou University,Guiyang 550025,China