Classification model of adversarial images based on RC-Net structure
[Objective] In this paper, we develop RC-Net, a resilient image classification model built on the ResNet architecture. It is designed to discern adversarial examples in image recognition tasks and to address the challenges that adversarial attacks pose to machine learning systems. With a focus on improving security and accuracy in critical applications such as autonomous vehicles, digital security systems, and financial fraud detection, RC-Net is intended to fortify existing image recognition systems against sophisticated adversarial manipulations and thereby improve the reliability of machine learning technologies.

[Methods] As a classification system rooted in the residual network architecture, RC-Net strengthens its ability to identify adversarial examples through a combination of feature extraction and classification techniques. It comprises two key modules, a feature extraction module based on the residual network and a classification rule definition module, and is trained iteratively with an adversarial training approach. Three popular adversarial attack methods are then modified and used to generate adversarial samples from the Mini-ImageNet dataset.

[Results] The proposed model achieves recognition accuracies of 92.3%, 96.1%, and 84.5% on samples generated by the three attack methods, surpassing the EfficientNet classification network in classifying both adversarial and non-adversarial samples. It also achieves the highest recall, 95.4%, in category identification, demonstrating its ability to distinguish adversarial from non-adversarial samples. The key developments reported in the paper, covering adversarial sample generation, the RC-Net model structure, data processing, loss function optimization, and optimizer selection, are summarized below.
(1) Adversarial sample generation: the study modifies the AdvGAN, FGSM, and G-ATN attack algorithms, changing the feedforward network structure, normalizing the input images, and adjusting the network structure to minimize the impact on subsequent classification.
(2) RC-Net model structure: using the pre-trained ResNet50 model for feature extraction, RC-Net defines classification rules for both adversarial and non-adversarial images; average pooling, BN layer normalization, and the LeakyReLU activation function improve its stability and predictive accuracy.
(3) Data processing and loss function optimization: initial data processing and loss function optimization are shown to be important for model performance, and several data processing techniques, together with a sigmoid output for the binary (adversarial vs. non-adversarial) classification, are explored.
(4) Optimizer selection: the study compares different optimizers (Adam, LAMB, SGD) for RC-Net and finds that their effectiveness varies with the adversarial attack method employed.
(5) Experimental results and analysis: ablation experiments validate the feasibility of the RC-Net structure, and comparative experiments confirm its effectiveness in identifying adversarial samples.
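To make the classification module concrete, the PyTorch sketch below assembles the ingredients listed above (a pre-trained ResNet50 feature extractor, average pooling, BN normalization, LeakyReLU, and a sigmoid output for the adversarial/non-adversarial decision) into a minimal binary classifier. The hidden-layer width, the exact layer ordering, and the torchvision weight handle are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of an RC-Net-style binary classifier, assuming torchvision >= 0.13.
# Not the authors' implementation: layer sizes and ordering are illustrative only.
import torch
import torch.nn as nn
from torchvision import models


class RCNetSketch(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Keep everything up to and including the global average pooling layer.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Sequential(
            nn.Flatten(),                   # (N, 2048, 1, 1) -> (N, 2048)
            nn.Linear(2048, hidden_dim),
            nn.BatchNorm1d(hidden_dim),     # BN layer normalization
            nn.LeakyReLU(0.1),              # LeakyReLU activation
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),                   # adversarial vs. non-adversarial score
        )

    def forward(self, x):
        return self.head(self.features(x))  # probability that x is adversarial
```

Training such a head would pair the sigmoid output with a binary cross-entropy loss and one of the optimizers the paper compares (Adam, LAMB, or SGD).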
[Conclusions] The proposed model, grounded in the residual network structure, is effective at distinguishing adversarial examples in image classification. Such examples pose a security threat, and RC-Net addresses them with its two modules: residual network-based feature extraction and defined classification rules. Through iterative adversarial training and testing against popular adversarial attack techniques, RC-Net significantly outperforms the EfficientNet model in identifying adversarial samples, demonstrating its stronger ability to tell the two classes apart. This work offers a practical way to mitigate the impact of adversarial samples in image recognition, particularly in applications with stringent security requirements, and its emphasis on building models that are robust to adversarial attacks lays a foundation for future research on the security and reliability of AI systems in practice.
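For reference, the sketch below shows the unmodified FGSM perturbation that serves as the starting point for one of the attack techniques the model is tested against. The `epsilon` value, the [0, 1] input range, and the cross-entropy loss are illustrative assumptions; the paper's modifications to FGSM, AdvGAN, and G-ATN are not reproduced here.

```python
# Basic FGSM sketch (PyTorch), for illustration only.
import torch
import torch.nn.functional as F


def fgsm_attack(model, images, labels, epsilon=0.03):
    """Generate adversarial images with the plain FGSM perturbation.

    Assumes `images` are normalized to [0, 1], matching the input
    normalization mentioned in the abstract.
    """
    images = images.clone().detach().requires_grad_(True)
    outputs = model(images)                      # forward pass on clean images
    loss = F.cross_entropy(outputs, labels)      # loss w.r.t. the true labels
    loss.backward()                              # gradients w.r.t. the inputs

    # Perturb each pixel in the direction that increases the loss.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()   # keep pixels in a valid range
```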
Keywords: adversarial attack; image classification; ResNet; adversarial example