
Path Planning for Agent Based on Improved Layered DQN Algorithm

To address the problems that, when an agent uses the DQN (Deep Q Network) algorithm for path planning, convergence is slow and the Q value has difficulty accurately describing how good an action is, a layered DQN algorithm that optimizes the DQN model structure is proposed. The excitation layer and the action layer established by the algorithm are superimposed to generate more accurate Q values for selecting the optimal action, which also makes the whole network more resistant to interference. Simulation results show that the agent converges faster with the layered DQN algorithm, verifying the effectiveness of the algorithm.
PATH PLANNING FOR AGENT BASED ON IMPROVED LAYERED DQN ALGORITHM
In order to solve the problems that the convergence speed is slow and it is difficult for the Q value to describe actions accurately when an agent uses the DQN algorithm for path planning, a layered DQN algorithm that optimizes the model structure of DQN is proposed. The excitation layer and the action layer built by the algorithm are superimposed to generate a more accurate Q value, which is used to select the optimal action and makes the anti-interference ability of the whole network stronger. The simulation results show that the agent using the layered DQN algorithm has a faster convergence speed, thus verifying the feasibility and effectiveness of the algorithm.
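
The abstract does not give the network details, so the following Python (PyTorch) sketch is only a rough illustration of the stated idea: an excitation layer and an action layer whose outputs are superimposed (summed) to form the Q values used for greedy action selection. The shared feature extractor, layer sizes, and the assumption that the excitation layer outputs a per-state scalar while the action layer outputs per-action scores are hypothetical choices, not details taken from the paper.

# Minimal sketch of a "layered" Q-network: excitation layer + action layer
# are superimposed to produce the Q values. All sizes/names are assumptions.
import torch
import torch.nn as nn

class LayeredQNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the agent's state (assumed).
        self.features = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # "Excitation layer": one scalar per state, broadcast over actions.
        self.excitation = nn.Linear(hidden, 1)
        # "Action layer": one score per candidate action.
        self.action = nn.Linear(hidden, n_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        # Superimpose the two layers to obtain the Q value of each action.
        return self.excitation(h) + self.action(h)

if __name__ == "__main__":
    net = LayeredQNetwork(state_dim=4, n_actions=4)
    q = net(torch.randn(1, 4))       # Q values for one state
    best_action = q.argmax(dim=1)    # greedy action selection
    print(q, best_action)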

Layered DQN; Neural network; Reinforcement learning; Path planning

杨尚志、张刚、陈跃华、何小龙


Faculty of Maritime and Transportation, Ningbo University, Ningbo, Zhejiang 315211, China


National Natural Science Foundation of China; Zhejiang Provincial Key Research and Development Program

51675286; 2018C02G2070536

2024

Computer Applications and Software
Shanghai Institute of Computing Technology; Shanghai Development Center of Computer Software Technology


Indexed by: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.615
ISSN:1000-386X
Year, Volume (Issue): 2024, 41(5)