A survey on the vulnerability of deep neural networks against adversarial attacks
With the advancement of accelerated hardware in recent years, there has been a surge in the development and application of intelligent systems. Deep learning systems in particular have shown exciting results on a wide range of tasks: classification, detection, and recognition. Despite these remarkable achievements, an active research area remains that is critical to the safety of these systems: deep learning algorithms have proven to be brittle against adversarial attacks. That is, carefully crafted adversarial inputs can consistently trigger erroneous classification outputs from a network model. Motivated by this, in this paper we survey four different attacks and two adversarial defense methods on three benchmark datasets to gain a better understanding of how to protect these systems. We support our findings by achieving state-of-the-art accuracy and collecting empirical evidence of attack effectiveness against deep neural networks. Additionally, we leverage network explainability methods to investigate an alternative approach to defending deep neural networks.
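To make the threat model concrete, the sketch below illustrates one canonical white-box attack of this kind, the Fast Gradient Sign Method (FGSM), which perturbs each input pixel in the direction that increases the classification loss. It is a minimal PyTorch example for illustration only and is not necessarily one of the four attacks surveyed in the paper; the function name `fgsm_attack` and the perturbation budget `epsilon` are assumed, illustrative choices.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Craft an adversarial example x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss the most.
    x_adv = x + epsilon * x.grad.sign()
    # Clamp so the perturbed image remains a valid input in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

A larger `epsilon` yields a stronger but more visible perturbation; even small budgets are often enough to flip the predicted class, which is precisely the brittleness the abstract describes.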
Keywords: Deep learning · Neural networks · AI explainability · Machine learning security