Defence Technology 2024, Vol. 34, Issue (4): 295-312. DOI: 10.1016/j.dt.2023.08.019

Mastering air combat game with deep reinforcement learning

Jingyu Zhu¹, Minchi Kuang¹, Wenqing Zhou², Heng Shi¹, Jihong Zhu¹, Xu Han³

Author information

  • 1. Department of Precision Instruments,Tsinghua University,Beijing 100084,China
  • 2. Department of Computer Science and Technology,Tsinghua University,Beijing 100084,China
  • 3. Chengdu Aircraft Design & Research Institute,Aviation Industry Corporation of China,Chengdu 610000,China

Abstract

Reinforcement learning has been applied to air combat problems in recent years, and the idea of curriculum learning is often used for reinforcement learning, but traditional curriculum learning suffers from the problem of plasticity loss in neural networks. Plasticity loss is the difficulty of learning new knowledge after the network has converged. To this end, we propose a motivational curriculum learning distributed proximal policy optimization (MCLDPPO) algorithm, through which trained agents can significantly outperform the predictive game tree and mainstream reinforcement learning methods. The motivational curriculum learning is designed to help the agent gradually improve its combat ability by observing the agent's unsatisfactory performance and providing appropriate rewards as guidance. Furthermore, complete tactical maneuvers are encapsulated based on existing air combat knowledge, and through flexible use of these maneuvers, some tactics beyond human knowledge can be realized. In addition, we design an interruption mechanism that increases the agent's decision-making frequency in emergencies: when the number of threats received by the agent changes, the current action is interrupted so that the agent can reacquire observations and decide again. Using the interruption mechanism significantly improves the agent's performance. To better simulate actual air combat, we use digital twin technology to simulate real air battles and propose a parallel battlefield mechanism that runs multiple simulation environments simultaneously, effectively improving data throughput. The experimental results demonstrate that the agent can fully utilize situational information to make reasonable decisions and adapt its tactics in air combat, verifying the effectiveness of the algorithmic framework proposed in this paper.
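The interruption mechanism described above can be illustrated with a minimal sketch: the agent commits to a multi-step tactical maneuver, but a change in the observed threat count forces an immediate re-observation and re-decision even mid-maneuver. All names here (`ToyEnv`, `select_maneuver`, `num_threats`) are illustrative assumptions, not the paper's actual implementation.

```python
class ToyEnv:
    """Minimal stand-in environment: the threat count jumps from 1 to 2 at step 3."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return {"num_threats": 1}
    def step(self, action):
        self.t += 1
        obs = {"num_threats": 2 if self.t >= 3 else 1}
        return obs, self.t >= 10  # (observation, done)

decision_log = []  # records the threat count seen at each decision point

def select_maneuver(observation):
    """Placeholder policy: commit to one encapsulated maneuver for several steps."""
    decision_log.append(observation["num_threats"])
    return {"name": "break_turn", "steps_left": 5}

def run_episode(env, max_steps=100):
    obs = env.reset()
    maneuver = select_maneuver(obs)
    threat_count = obs["num_threats"]
    for _ in range(max_steps):
        obs, done = env.step(maneuver["name"])
        maneuver["steps_left"] -= 1
        # Interruption mechanism: a change in the number of threats aborts
        # the current maneuver and triggers an immediate re-decision.
        if obs["num_threats"] != threat_count or maneuver["steps_left"] <= 0:
            threat_count = obs["num_threats"]
            maneuver = select_maneuver(obs)
        if done:
            break
    return decision_log
```

Without the interruption check, the agent would finish its five-step maneuver before noticing the new threat; with it, the threat change at step 3 produces an extra decision two steps early.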

Key words

Air combat; MCLDPPO; Interruption mechanism; Digital twin; Distributed system


Publication year: 2024
Journal: Defence Technology (China Ordnance Society)
Indexed in: CSTPCD
Impact factor: 0.358
ISSN: 2214-9147