In AI security, robust models offer stronger protection against adversarial examples than non-robust models. A model produced by adversarial training outperforms a non-robust model in generalization to adversarial examples, and it also yields more interpretable saliency maps. To better explain the nature of adversarial training, the saliency map shows why the robust model generalizes well: it demonstrates that the robust model learns the key features of the input examples and evaluates inputs based on those features.
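One common way to produce the saliency maps discussed above is to take the gradient of a class score with respect to the input: large-magnitude entries mark the input features the model relies on most. The sketch below is a minimal, hypothetical illustration in NumPy for a tiny one-hidden-layer ReLU network (the function name, weights, and shapes are illustrative assumptions, not the method of any specific paper):

```python
import numpy as np

def saliency_map(x, W1, W2, target):
    """Gradient-based saliency sketch: |d logit_target / d x| for a
    one-hidden-layer ReLU network with logits = W2 @ relu(W1 @ x)."""
    # Forward pass: hidden pre-activations and ReLU activations.
    h_pre = W1 @ x
    h = np.maximum(h_pre, 0.0)
    # Backward pass: gradient of the target logit w.r.t. the input.
    # d logit_target / d h = W2[target]; the ReLU gate zeroes the
    # contribution of inactive hidden units.
    grad_h = W2[target] * (h_pre > 0)
    grad_x = W1.T @ grad_h
    # Saliency is the elementwise magnitude of the input gradient.
    return np.abs(grad_x)

# Toy usage with random weights (purely illustrative).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # 6 input features, 4 hidden units
W2 = rng.normal(size=(3, 4))   # 3 output classes
x = rng.normal(size=6)
sal = saliency_map(x, W1, W2, target=2)
print(sal.shape)  # one saliency value per input feature
```

For an adversarially trained model, the expectation expressed in the text is that such maps concentrate on semantically meaningful features rather than on noise-like patterns.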