控制与决策 (Control and Decision), 2024, Vol. 39, Issue (3): 768-776. DOI: 10.13195/j.kzyjc.2022.1181

Imitation learning robustness enhancement based on modified cross entropy

李晓豪¹, 郑海斌¹, 王雪柯¹, 张京京², 陈晋音¹, 王巍³, 赵文红⁴

Author information

  • 1. Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023; College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023
  • 2. State Key Laboratory of Information Security, Beijing 100039
  • 3. The 36th Research Institute of China Electronics Technology Group Corporation, Jiaxing 314001, Zhejiang
  • 4. College of Information Engineering, Jiaxing Nanhu University, Jiaxing 314001, Zhejiang

Abstract

Imitation learning is a learning paradigm that imitates expert demonstrations and requires a large number of data samples for supervised training. If the expert demonstrations are mixed with malicious examples, or the exploration data is disturbed by noise, the apprentice's learning is affected and learning errors accumulate. On the other hand, the deep models used in imitation learning are vulnerable to adversarial attacks. Addressing these security threats to imitation learning, this paper builds defenses from two aspects: the model loss and the model structure. In terms of the model loss, a robustness enhancement method for imitation learning based on an improved cross-entropy is proposed. In terms of the model structure, an existing noise-network approach is applied to improve the robustness of imitation learning, and the noise network is further combined with the improved cross-entropy to strengthen the model's resistance to adversarial examples. Three white-box attack methods and one black-box attack method from deep learning are applied to imitation learning to verify the defense performance of the proposed approach. Specifically, generative adversarial imitation learning (GAIL) is taken as an example: the feasibility of the robustness enhancement method and the fragility of the imitation learning model are verified under various attack strategies, and the robustness enhancement effect of the model is evaluated.
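
To make the two defenses described above concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it assumes a NoisyNet-style factorised-Gaussian linear layer as the noise-network component and uses a label-smoothed binary cross-entropy as an illustrative stand-in for the improved cross-entropy, plugged into a GAIL-style discriminator that separates expert state-action pairs from the agent's. The names NoisyLinear and smoothed_bce, the layer sizes, and the smoothing factor are all hypothetical.

# Minimal sketch (not the paper's code): a NoisyNet-style linear layer plus a
# label-smoothed binary cross-entropy, applied to a GAIL-style discriminator.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyLinear(nn.Module):
    # Structure-level defense: the weights are perturbed by fresh factorised
    # Gaussian noise on every forward pass, with learnable noise scales.
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.weight_mu, -bound, bound)
        nn.init.uniform_(self.bias_mu, -bound, bound)
        nn.init.constant_(self.weight_sigma, sigma0 * bound)
        nn.init.constant_(self.bias_sigma, sigma0 * bound)

    @staticmethod
    def _scaled_noise(size):
        x = torch.randn(size)
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._scaled_noise(self.in_features)
        eps_out = self._scaled_noise(self.out_features)
        weight = self.weight_mu + self.weight_sigma * torch.outer(eps_out, eps_in)
        bias = self.bias_mu + self.bias_sigma * eps_out
        return F.linear(x, weight, bias)


def smoothed_bce(logits, targets, smoothing=0.1):
    # Loss-level defense (illustrative stand-in for the improved cross-entropy):
    # soften the hard 0/1 labels so individual poisoned or adversarial samples
    # produce less over-confident gradients.
    soft_targets = targets * (1.0 - smoothing) + 0.5 * smoothing
    return F.binary_cross_entropy_with_logits(logits, soft_targets)


# Toy GAIL-style discriminator over concatenated (state, action) features.
disc = nn.Sequential(NoisyLinear(8, 64), nn.Tanh(), NoisyLinear(64, 1))
expert_sa = torch.randn(32, 8)   # placeholder expert (state, action) batch
agent_sa = torch.randn(32, 8)    # placeholder agent (state, action) batch
logits = disc(torch.cat([expert_sa, agent_sa])).squeeze(-1)
labels = torch.cat([torch.ones(32), torch.zeros(32)])
loss = smoothed_bce(logits, labels)
loss.backward()

In the paper the improved cross-entropy has its own formulation; the smoothing above only marks where a loss-level modification enters the discriminator update, while the noisy layers show where structural noise enters the network.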

Key words

imitation learning / robustness enhancement / improved cross entropy / noise network / adversarial attack

Funding

National Natural Science Foundation of China (62072406)

Natural Science Foundation of Zhejiang Province (LY19F020025)

Ningbo "Science and Technology Innovation 2025" Major Special Project (2018B10063)

Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0100801)

Key Research and Development Program of Zhejiang Province (2021C01117)

Zhejiang Provincial "Ten Thousand Talents Plan" Science and Technology Innovation Leading Talent Project (2020R52011)

Publication year

2024

控制与决策 (Control and Decision), published by Northeastern University (东北大学)
Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.227
ISSN: 1001-0920
References: 39