Imitation learning is a learning paradigm in which a policy is trained by imitating expert demonstrations, which requires a large number of data samples for supervised learning. If the expert demonstrations are contaminated with malicious examples, or the exploration data are perturbed, the student policy's learning may be degraded and errors can accumulate. Moreover, the deep learning models used in imitation learning are vulnerable to adversarial attacks. To address these security threats to imitation learning, this paper defends it from two aspects: the model loss and the model structure. In terms of the model loss, a robustness enhancement method for imitation learning based on an improved cross-entropy loss is proposed. In terms of the model structure, an existing robustness enhancement method based on noise networks is applied to verify its robustness enhancement effect, and the noise network is further combined with the improved cross-entropy loss to improve the model's robustness. Three white-box attack methods and one black-box attack method from deep learning are applied to imitation learning to verify the defense performance of the proposed methods, with generative adversarial imitation learning (GAIL) selected as the example. The various attack strategies verify both the fragility of the imitation learning model and the feasibility of the robustness enhancement methods, and the robustness enhancement effect of the model is evaluated.
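The abstract does not give the concrete form of either defense, so the following is only a minimal sketch, assuming PyTorch: a NoisyNet-style noisy linear layer (the factorized-Gaussian formulation of Fortunato et al., one standard realization of a "noise network") paired with a label-smoothed cross-entropy used as a hypothetical stand-in for the paper's improved cross-entropy. The names `NoisyLinear`, `smoothed_cross_entropy`, and `sigma0` are illustrative, not taken from the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with factorized Gaussian parameter noise (NoisyNet-style)."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        self.register_buffer("eps_in", torch.zeros(in_features))
        self.register_buffer("eps_out", torch.zeros(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.weight_mu, -bound, bound)
        nn.init.uniform_(self.bias_mu, -bound, bound)
        nn.init.constant_(self.weight_sigma, sigma0 * bound)
        nn.init.constant_(self.bias_sigma, sigma0 * bound)

    @staticmethod
    def _f(x):  # noise-shaping function f(x) = sign(x) * sqrt(|x|)
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        if self.training:
            # Resample the factorized noise on every training forward pass.
            self.eps_in.normal_()
            self.eps_out.normal_()
            eps_w = torch.outer(self._f(self.eps_out), self._f(self.eps_in))
            weight = self.weight_mu + self.weight_sigma * eps_w
            bias = self.bias_mu + self.bias_sigma * self._f(self.eps_out)
        else:  # use the mean parameters at evaluation time
            weight, bias = self.weight_mu, self.bias_mu
        return F.linear(x, weight, bias)

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Label-smoothed cross-entropy: a hypothetical stand-in for the
    paper's 'improved cross-entropy' (its exact form is not specified)."""
    log_p = F.log_softmax(logits, dim=-1)
    nll = -log_p.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_p.mean(dim=-1)  # cross-entropy against the uniform label
    return ((1.0 - eps) * nll + eps * uniform).mean()
```

Training the policy (or the GAIL discriminator) with such layers and such a loss injects parameter noise and caps the loss's sensitivity to any single label, which is plausibly the general mechanism by which the two defenses blunt small adversarial perturbations.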
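Likewise, the attacks are only named, not specified. FGSM is the canonical white-box attack and is commonly ported from classification to learned policies by perturbing the observation so as to move the policy away from its own chosen action; a rough sketch of that setting, again in PyTorch and with `policy` assumed to map a batch of observations to discrete-action logits, is:

```python
import torch
import torch.nn.functional as F

def fgsm_observation_attack(policy, obs, epsilon=0.01):
    """One-step FGSM on a policy's observations (illustrative names only)."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax(dim=-1)          # the action the clean policy takes
    loss = F.cross_entropy(logits, action)  # loss w.r.t. the policy's own choice
    loss.backward()
    # A single signed-gradient step increases the loss, pushing the
    # policy away from its original action under an L-infinity budget.
    return (obs + epsilon * obs.grad.sign()).detach()
```

Comparing a policy's return on clean versus perturbed observations, before and after applying the defenses above, is the usual way such robustness enhancement effects are evaluated.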