Research on Robot Navigation Method Integrating Safe Convex Space and Deep Reinforcement Learning
A robot navigation method based on deep reinforcement learning (DRL) is proposed for navigating a robot in scenarios where the global map is unknown and the environment contains both dynamic and static obstacles. Compared with other DRL-based navigation methods applied in complex dynamic environments, the proposed method introduces improvements in the design of the action space, state space, and reward function. Additionally, it separates the control process from the neural network, which facilitates transferring simulation results to practical deployment. Specifically, the action space is defined by intersecting the safe convex space, computed from 2D lidar data, with the kinematic limits of the robot. This intersection narrows the feasible trajectory search space while meeting both short-term dynamic obstacle avoidance and long-term global navigation needs. Reference points are sampled from this action space to form a reference trajectory, which the robot follows using a model predictive control (MPC) algorithm. The method also incorporates the safe convex space and reference points into the design of the state space and reward function. Ablation studies demonstrate the superior navigation success rate, reduced time consumption, and robust generalization of the proposed method across various static and dynamic environments.
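To make the action-space construction concrete, the following is a minimal sketch of the two steps the abstract describes: building a safe convex region from 2D lidar beams as an intersection of halfspaces, and sampling reference points from the intersection of that region with a simple kinematic reachable set. The halfspace construction, the disc-shaped reachable set, and all function names here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lidar_to_halfspaces(angles, ranges, margin=0.2):
    """Approximate a safe convex region around the robot (at the origin)
    as the intersection of halfspaces n·p <= d, one per lidar beam.
    'margin' keeps the region a safety distance short of each hit point.
    (Illustrative construction; the paper's method may differ.)"""
    A, b = [], []
    for theta, r in zip(angles, ranges):
        n = np.array([np.cos(theta), np.sin(theta)])  # unit beam direction
        A.append(n)
        b.append(max(r - margin, 0.0))
    return np.array(A), np.array(b)

def sample_reference_points(A, b, v_max, dt, n_samples=50, seed=0):
    """Sample candidate reference points from the intersection of the
    safe convex space {p : A p <= b} with a kinematic reachable set,
    here modeled as a disc of radius v_max*dt, via rejection sampling."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n_samples:
        p = rng.uniform(-v_max * dt, v_max * dt, size=2)
        # accept only points inside both the disc and the convex region
        if np.linalg.norm(p) <= v_max * dt and np.all(A @ p <= b):
            pts.append(p)
    return np.array(pts)
```

In the full method, a DRL policy would select among such reference points to form a trajectory that an MPC tracker then follows; the rejection sampler above simply illustrates how the intersection restricts the search space to points that are both reachable and collision-free.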