Improved reinforcement learning algorithm for mobile robot path planning

To address the poor path smoothness, slow convergence, and low learning efficiency of the traditional Q-learning algorithm, this paper proposes an improved Q-learning algorithm for mobile robot path planning. First, the action set is selected by considering the obstacle density and the relative position of the start point, which accelerates the convergence of the Q-learning algorithm. Second, a continuous heuristic factor is added to the reward function; it consists of the distance from the current point to the goal point and the distances from the current point to all obstacles in the map and to the map boundary. Finally, a scale factor is introduced into the initialization of the Q-value table to provide the mobile robot with prior environment information, and the proposed improved Q-learning algorithm is verified by simulation on a grid map. The simulation results show that the improved Q-learning algorithm converges significantly faster than the traditional Q-learning algorithm and adapts better to complex environments, which verifies the superiority of the improved algorithm.
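The abstract does not give the exact formulas, but a minimal Python sketch of two of the ideas it describes (a continuous heuristic factor built from goal distance and obstacle/boundary clearance, and a distance-scaled Q-table initialization) might look like the following. The functional forms, the weights w_goal and w_obs, and the scale parameter are illustrative assumptions, not the authors' definitions.

```python
# Illustrative sketch only: the heuristic shaping term and the scale-factor
# initialization below are assumed forms chosen to mirror the ideas in the
# abstract (goal attraction, obstacle/boundary repulsion, distance-scaled prior).
import numpy as np

def heuristic_factor(state, goal, obstacles, grid_size, w_goal=1.0, w_obs=0.5):
    """Continuous shaping term: closer to the goal -> larger value,
    closer to obstacles or the map boundary -> smaller value (assumed form)."""
    x, y = state
    d_goal = np.hypot(goal[0] - x, goal[1] - y)
    d_obs = min(np.hypot(ox - x, oy - y) for ox, oy in obstacles) if obstacles else np.inf
    d_border = min(x, y, grid_size - 1 - x, grid_size - 1 - y)
    return -w_goal * d_goal + w_obs * min(d_obs, d_border)

def init_q_table(grid_size, n_actions, goal, scale=0.1):
    """Q-value initialization with a scale factor: states nearer the goal start
    with larger values, giving the robot prior knowledge of the environment."""
    q = np.zeros((grid_size, grid_size, n_actions))
    for x in range(grid_size):
        for y in range(grid_size):
            q[x, y, :] = -scale * np.hypot(goal[0] - x, goal[1] - y)
    return q

# Example: 20x20 grid map, 8-connected action set, goal at the far corner.
obstacles = [(5, 5), (5, 6), (12, 14)]
Q = init_q_table(20, 8, goal=(19, 19))
r_shaped = heuristic_factor((3, 4), (19, 19), obstacles, 20)
```

Under this reading, the shaping term is simply added to the step reward and the prior is written into the Q-table before training, so the standard Q-learning update rule itself is left unchanged.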

Keywords: reinforcement learning; path planning; heuristic reward function; Q-value initialization

张威、初泽源、杨玉涛、王伟


College of Aeronautical Engineering, Civil Aviation University of China, Tianjin 300300

China Civil Aviation Research Base of Aviation Ground Special Equipment, Tianjin 300300

Key Laboratory of Civil Aviation Smart Airport Theory and System, Guangzhou 510470

College of Safety Science and Engineering, Civil Aviation University of China, Tianjin 300300



2024

Journal of Civil Aviation University of China
Civil Aviation University of China

Impact factor: 0.363
ISSN: 1674-5590
Year, Volume (Issue): 2024, 42(5)