Robust Physical Adversarial Camouflages for Image Classifiers
Deep learning models are vulnerable to adversarial examples. As a more threatening category for practical deep learning systems, physical adversarial examples have received extensive research attention in recent years. Most existing methods apply local adversarial patch noise to attack image classification models in the physical world. However, the attack effectiveness of 2D patches inevitably declines in 3D space as the viewing angle changes. To address this issue, the proposed Adv-Camou method uses a spatial combination transformation to generate training examples with arbitrary viewpoints and transformed backgrounds in real time. Moreover, the cross-entropy loss between the predicted class and the target class is minimized so that the model outputs the specified incorrect class. In addition, we establish a 3D scene in which different attacks can be evaluated fairly and reproducibly. Experimental results show that the coated adversarial camouflage generated by Adv-Camou can fool image classifiers from arbitrary viewpoints. In the 3D simulation scene, the average targeted attack success rate of Adv-Camou exceeds that of pieced-together patches by more than 25%. The black-box targeted attack success rate against the Clarifai commercial classification system reaches 42%. In addition, the average attack success rate in real-world experiments with 3D-printed models is about 66%, demonstrating that our method significantly outperforms state-of-the-art methods.
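To make the two core ideas in the abstract concrete, the sketch below shows an expectation-over-transformation style optimization loop: a texture is repeatedly passed through a randomly sampled transformation (a cheap 2D affine stand-in for the paper's 3D viewpoint and background combination transformation), and the targeted cross-entropy loss is minimized so the classifier is pushed toward a chosen incorrect class. All names (`texture`, `target_class`) and hyperparameters here are illustrative assumptions, not the paper's actual implementation, which renders a full 3D scene.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative sketch only: optimize a camouflage texture so that a frozen
# classifier predicts a chosen target class under random transformations.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

texture = torch.rand(1, 3, 224, 224, requires_grad=True)  # camouflage pattern
target_class = torch.tensor([555])                        # chosen wrong label
opt = torch.optim.Adam([texture], lr=0.01)

for step in range(200):
    # Sample a random 2D affine transform (rotation + translation) as a
    # stand-in for the paper's arbitrary-viewpoint rendering.
    angle = torch.empty(1).uniform_(-0.5, 0.5)  # radians
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.cat([cos, -sin, torch.empty(1).uniform_(-0.1, 0.1)]),
        torch.cat([sin,  cos, torch.empty(1).uniform_(-0.1, 0.1)]),
    ]).unsqueeze(0)
    grid = F.affine_grid(theta, texture.shape, align_corners=False)
    view = F.grid_sample(texture.clamp(0, 1), grid, align_corners=False)

    # Targeted attack: minimize cross-entropy w.r.t. the target class so the
    # model outputs the specified incorrect class. (Input normalization is
    # omitted here for brevity.)
    loss = F.cross_entropy(model(view), target_class)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the transformation is resampled at every step, the resulting texture must remain adversarial in expectation over viewpoints rather than for a single fixed view, which is what lets the camouflage survive changes in viewing angle.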