To address the poor path smoothness, slow convergence and low learning efficiency of paths planned by the traditional Q-learning algorithm, this paper proposes an improved Q-learning algorithm for mobile robot path planning. First, the action set is selected according to the obstacle density and the relative position of the start point, which accelerates the convergence of the Q-learning algorithm. Second, a continuous heuristic factor is added to the reward function; it combines the distance from the current point to the goal with the distances from the current point to all obstacles in the map and to the map boundary. Finally, a scale factor is introduced into the initialization of the Q-value table to provide the mobile robot with a priori environment information, and the proposed improved Q-learning algorithm is verified through simulations on a grid map. The simulation results show that the improved Q-learning algorithm converges significantly faster than the traditional Q-learning algorithm and adapts better to complex environments, which verifies the superiority of the improved algorithm.
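The two reward- and initialization-related ideas summarized above can be sketched in code. The abstract does not give concrete formulas, so the functional forms, weights (`w_goal`, `w_obs`, `w_bound`) and the `scale` parameter below are illustrative assumptions, not the paper's actual definitions:

```python
import math

def heuristic_reward(pos, goal, obstacles, map_size,
                     w_goal=1.0, w_obs=0.5, w_bound=0.5):
    """Continuous heuristic reward term (illustrative sketch).

    Combines the distance to the goal with the distances to
    obstacles and to the map boundary, as the abstract describes.
    The weights and functional form are assumptions.
    """
    # Attraction: the closer to the goal, the higher the reward.
    r_goal = -w_goal * math.dist(pos, goal)

    # Repulsion from obstacles: the nearest obstacle dominates.
    d_obs = min(math.dist(pos, o) for o in obstacles) if obstacles else float("inf")
    r_obs = -w_obs / (d_obs + 1e-6)

    # Repulsion from the map boundary on a w-by-h grid.
    x, y = pos
    w, h = map_size
    d_bound = min(x, y, w - 1 - x, h - 1 - y)
    r_bound = -w_bound / (d_bound + 1e-6)

    return r_goal + r_obs + r_bound

def init_q_table(map_size, goal, n_actions, scale=0.1):
    """Q-table initialization with a priori distance information (sketch).

    Cells closer to the goal start with larger Q-values, scaled by
    the hypothetical `scale` factor, so early exploration is biased
    toward the goal instead of starting from an all-zero table.
    """
    w, h = map_size
    return {(x, y): [-scale * math.dist((x, y), goal)] * n_actions
            for x in range(w) for y in range(h)}
```

For example, on a 10x10 grid with the goal at (9, 9), a state next to the goal receives a higher shaped reward and a higher initial Q-value than a distant state, which is the bias both mechanisms are meant to introduce.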