

Unmanned surface vehicle escape strategy based on hybrid sampling deep Q-network
[Objective] Aiming at the encirclement tactics adopted by enemy ships, this study focuses on the problem of planning an escape strategy when an unmanned surface vehicle (USV) is surrounded by enemy ships. [Methods] A hybrid sampling deep Q-network (HS-DQN) reinforcement learning algorithm is proposed which gradually increases the replay frequency of important samples while retaining a certain level of exploration to prevent the algorithm from falling into a local optimum. The state space, action space and reward function are designed to obtain the USV's optimal escape strategy, and its performance is compared with that of the deep Q-network (DQN) algorithm in terms of reward and escape success rate. [Results] The simulation results show that training with the HS-DQN algorithm increases the escape success rate by 2% and the convergence speed by 20%. [Conclusions] The HS-DQN algorithm can reduce the number of useless explorations and speed up the convergence of the algorithm. The simulation results verify the effectiveness of the USV escape strategy.
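The hybrid sampling idea summarized in the abstract, drawing part of each minibatch in proportion to sample importance and the rest uniformly, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class name `HybridReplayBuffer`, the mixing ratio `rho`, and the use of absolute TD error as the priority are all assumptions.

```python
import numpy as np

class HybridReplayBuffer:
    """Illustrative replay buffer mixing prioritized and uniform sampling.

    A fraction `rho` of each minibatch is drawn in proportion to
    per-transition priorities (so important samples are replayed more
    often); the remainder is drawn uniformly, which preserves some
    exploration of the buffer. Increasing `rho` over the course of
    training gradually raises the replay frequency of important samples.
    """

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.data = []        # stored transitions (s, a, r, s')
        self.priorities = []  # |TD error| + eps per transition (assumed priority measure)

    def push(self, transition, td_error=1.0):
        # Drop the oldest transition when the buffer is full.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(abs(td_error) + 1e-6)

    def sample(self, batch_size, rho=0.5):
        """Draw a minibatch: `rho` prioritized, `1 - rho` uniform."""
        n_prior = int(batch_size * rho)
        n_unif = batch_size - n_prior
        p = np.asarray(self.priorities, dtype=float)
        p = p / p.sum()
        prior_idx = np.random.choice(len(self.data), size=n_prior, p=p)
        unif_idx = np.random.choice(len(self.data), size=n_unif)
        idx = np.concatenate([prior_idx, unif_idx]).astype(int)
        return [self.data[i] for i in idx]
```

In a training loop, `rho` would start small and be annealed toward 1 as learning progresses, matching the abstract's description of gradually increasing the replay frequency of important samples while keeping some uniform (exploratory) sampling early on.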

USV; Apollonius circle; pursuit-evasion; deep reinforcement learning; hybrid sampling

YANG Yuanpeng, SONG Lifei, MAO Jiaqi, LI Yi, CHEN Houjing


Systems Engineering Research Institute, China State Shipbuilding Corporation Limited, Beijing 100094, China

Key Laboratory of High Performance Ship Technology of the Ministry of Education, Wuhan University of Technology, Wuhan 430063, China

China Ship Development and Design Center, Wuhan 430064, China


Supported by the National Natural Science Foundation of China (Grant No. 51809203)

2024

Chinese Journal of Ship Research
China Ship Development and Design Center


Indexed in CSTPCD and the Peking University Core Journals list
Impact factor: 0.496
ISSN:1673-3185
Year, volume (issue): 2024, 19(1)