Understanding adversarial attacks on observations in deep reinforcement learning
Deep reinforcement learning models are vulnerable to adversarial attacks that can decrease the cumulative expected reward of a victim by manipulating its observations. Despite the efficiency of previous optimization-based methods for generating adversarial noise in supervised learning, such methods might not achieve the lowest cumulative reward since they do not generally explore the environmental dynamics. Herein, a framework is provided to better understand the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space. The reformulation adopted herein generates an optimal adversary in the function space of targeted attacks, repelling them via a generic two-stage framework. In the first stage, a deceptive policy is trained by hacking the environment and discovering a set of trajectories routing to the lowest reward or the worst-case performance. Next, the adversary misleads the victim to imitate the deceptive policy by perturbing the observations. Compared to existing approaches, it is theoretically shown that our adversary is stronger under an appropriate noise level. Extensive experiments demonstrate the superiority of the proposed method in terms of efficiency and effectiveness, achieving state-of-the-art performance in both Atari and MuJoCo environments.
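The two-stage framework the abstract describes can be sketched in a few lines of NumPy. Everything below is a hypothetical illustration under simplifying assumptions, not the paper's implementation: both policies are linear stand-ins, and the second stage uses an iterated FGSM-style targeted step in place of the paper's attack.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS = 4, 2

# Stage 1 (assumed already done): a "deceptive" policy trained to drive the
# environment toward its lowest return. A fixed random linear policy stands
# in here for that trained result.
W_deceptive = rng.normal(size=(OBS_DIM, N_ACTIONS))

def deceptive_policy(obs):
    """Worst-case target action for the current observation."""
    return int(np.argmax(obs @ W_deceptive))

# Victim: another linear softmax policy (stand-in for a trained RL agent).
W_victim = rng.normal(size=(OBS_DIM, N_ACTIONS))

def victim_logits(obs):
    return obs @ W_victim

# Stage 2: perturb the observation inside an L-inf ball of radius eps so the
# victim imitates the deceptive policy -- iterated sign-gradient steps on a
# targeted cross-entropy loss toward the deceptive action.
def attack(obs, eps=0.5, steps=20, lr=0.1):
    target = deceptive_policy(obs)
    delta = np.zeros_like(obs)
    for _ in range(steps):
        logits = victim_logits(obs + delta)
        p = np.exp(logits - logits.max())
        p /= p.sum()                        # softmax probabilities
        grad_logits = p.copy()
        grad_logits[target] -= 1.0          # d(cross-entropy)/d(logits)
        grad_obs = W_victim @ grad_logits   # chain rule through the linear victim
        delta = np.clip(delta - lr * np.sign(grad_obs), -eps, eps)
    return obs + delta

obs = rng.normal(size=OBS_DIM)
adv = attack(obs)
print("clean action:", victim_logits(obs).argmax(),
      "| target:", deceptive_policy(obs),
      "| attacked action:", victim_logits(adv).argmax())
```

The key difference from supervised-learning attacks is that the perturbation target is not an arbitrary wrong label but the action of a policy trained against the environment's dynamics, which is what lets the attack route the victim toward worst-case trajectories.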

deep learning; reinforcement learning; adversarial robustness; adversarial attack

You QIAOBEN, Chengyang YING, Xinning ZHOU, Hang SU, Jun ZHU, Bo ZHANG


Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua-Bosch Joint Center for Machine Learning, Institute for Artificial Intelligence, Tsinghua University, Beijing 100084, China

Peng Cheng Laboratory, Shenzhen 518055, China

National Key Research and Development Program of China (2020AAA0104304, 2017YFA0700904); National Natural Science Foundation of China (61620106010, 62061136001, 61621136008, 62076147, U19B2034, U1811461, U19A2081); Beijing NSF Project (JQ19016); Beijing Academy of Artificial Intelligence (BAAI); Tsinghua-Huawei Joint Research Program; Tsinghua Institute for Guo Qiang; Tsinghua-OPPO Joint Research Center for Future Terminal Technology; Tsinghua-China Mobile Communications Group Co., Ltd. Joint Institute

2024

SCIENCE CHINA Information Sciences
Chinese Academy of Sciences

CSTPCD, EI
Impact factor: 0.715
ISSN: 1674-733X
Year, Volume (Issue): 2024, 67(5)