Towards a Modular Perspective to Boost Ensemble Attacks
Adversarial examples reveal the vulnerabilities of neural network models while also serving as a crucial tool for assessing their robustness. The transferability of adversarial examples, which allows them to attack unknown network models, enhances their applicability in real-world scenarios. Traditional ensemble methods, characterized by their coarse granularity, constrain this transferability. This paper introduces a novel approach from a modular perspective. First, the basic steps of traditional ensemble methods are restructured at a finer granularity and abstracted into individual fundamental modules. These modules are then divided into two distinct classes, with each class assigned a specific responsibility and focused on executing a single, well-defined task. Finally, a momentum mechanism is incorporated into the method, significantly enhancing the transferability of the generated adversarial examples. Experimental results demonstrate that the proposed method achieves substantial improvements across various ensemble strategies, confirming its effectiveness.
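For context, the momentum mechanism referenced above is commonly instantiated in transfer attacks as momentum-iterative gradient updates over an ensemble of surrogate models (in the style of MI-FGSM with loss-level fusion). The sketch below is a generic, minimal illustration of that idea, not the paper's modular algorithm; the function name `mi_fgsm_ensemble` and the hyperparameters `eps`, `alpha`, `steps`, and `mu` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mi_fgsm_ensemble(models, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Momentum-iterative attack over an ensemble of surrogate models.

    A generic sketch of the momentum mechanism used in transfer attacks,
    not the paper's exact modular method. Surrogate losses are fused by
    simple averaging; inputs are assumed to be images in [0, 1] with
    shape (N, C, H, W).
    """
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated momentum across iterations
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Fuse the ensemble by averaging cross-entropy losses (one
        # possible fusion granularity; others fuse logits or gradients).
        loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum update: L1-normalize the gradient, then accumulate.
        norm = grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12
        g = mu * g + grad / norm
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

In a loop like this, the fusion of surrogate outputs and the momentum-based update are entangled in a single step; the modular perspective described in the abstract treats such steps as separate, individually replaceable components.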