
A survey on the vulnerability of deep neural networks against adversarial attacks

With the advancement of accelerated hardware in recent years, there has been a surge in the development and application of intelligent systems. Deep learning systems in particular have shown exciting results on a wide range of tasks, including classification, detection, and recognition. Despite these remarkable achievements, an area critical to the safety of these systems remains open: deep learning algorithms have proven to be brittle against adversarial attacks. That is, carefully crafted adversarial inputs can consistently trigger erroneous classification outputs from a network model. This motivates the present paper: we survey four different attacks and two adversarial defense methods on three benchmark datasets to gain a better understanding of how to protect these systems. We support our findings by achieving state-of-the-art accuracy and collecting empirical evidence of attack effectiveness against deep neural networks. Additionally, we leverage network explainability methods to investigate an alternative approach to defending deep neural networks.
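To make the brittleness described above concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), a representative example of the kind of gradient-based attack such surveys cover. The PyTorch framing, the fgsm_attack name, and the epsilon budget are illustrative assumptions, not details taken from the paper itself.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the fast gradient sign method.

    The perturbation is epsilon * sign(grad_x loss): a small step in the
    input direction that most increases the classification loss.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to the
    # valid pixel range so the result is still a legal image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Even with a small epsilon, such perturbations are typically imperceptible to humans yet can consistently flip the model's predicted class.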

Keywords: Deep learning; Neural networks; AI explainability; Machine learning security

Michel, Andy; Jha, Sumit Kumar; Ewetz, Rickard


Univ Cent Florida

Univ Texas San Antonio

2022

Progress in Artificial Intelligence


Indexed in: EI, ESCI
ISSN: 2192-6360
Year, Volume (Issue): 2022, 11(2)