Immune defense against adversarial attacks via hourglass data-processing units and group RBF units
Deep neural networks (DNNs) are vulnerable to adversarial examples with imperceptible perturbations to clean images. To counter this issue, researchers have proposed many powerful defensive methods, which can be categorized into external defense methods (EDMs) and immune defense methods (IDMs). EDMs try to purify adversarial examples before they are fed into DNNs, while IDMs try to robustify the DNNs per se. This work focuses on IDMs. Most existing IDMs boost robustness mainly via robust optimization strategies rather than by building robust modules into DNNs. This work introduces two new robust units into DNNs: the hourglass data-processing units, based on feature squeezing and precision injection, for reducing adversarial perturbations, and the group RBF units for enhancing nonlinearity and handling intra-class variations. This work also uses label smoothing, an annealing strategy, and weight decay during optimization to further boost robustness. Extensive experiments on two public datasets, MNIST and CIFAR-10, and two popular DNNs, LeNet5 and VGG16, demonstrate that integrating the proposed robust units into DNNs can greatly improve their immune abilities against adversarial attacks while preserving their original recognition performance on clean samples.
Keywords: immune defense; precision injection; group radial basis function (RBF); weight decay
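To make the two building blocks concrete, below is a minimal sketch of the ideas the abstract names: feature squeezing via bit-depth reduction (one common realization of precision reduction, which quantizes inputs so that small adversarial perturbations collapse onto the same value) and a radial basis function unit whose activation depends on distance to a set of centers (grouping several centers per class is what lets the paper's group RBF units absorb intra-class variation). The function names, the quantization scheme, and the single-vector RBF formulation are illustrative assumptions, not the paper's exact layers.

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Feature squeezing by bit-depth reduction (illustrative assumption):
    quantize inputs in [0, 1] to 2**bits - 1 levels, so perturbations
    smaller than half a quantization step are wiped out."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def rbf_unit(x, centers, gamma=1.0):
    """A minimal RBF unit: each output responds to the squared distance
    between the input and one center, giving a highly nonlinear,
    locally supported activation. Allocating a group of centers per
    class (as the group RBF units do) lets different modes of a class
    each claim their own center."""
    # x: shape (d,); centers: shape (k, d) -> activations: shape (k,)
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-gamma * d2)
```

For example, a clean pixel value 0.50 and an adversarially perturbed value 0.51 both quantize to the same level at 4 bits, illustrating how the squeezing step removes small perturbations before the features reach later layers.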