Deep-reinforcement-learning-based UAV autonomous navigation and collision avoidance in unknown environments
In some military application scenarios, Unmanned Aerial Vehicles (UAVs) need to perform missions with the assistance of on-board cameras when radar is not available and communication is interrupted, which brings challenges for UAV autonomous navigation and collision avoidance. In this paper, an improved deep-reinforcement-learning algorithm, Deep Q-Network with a Faster R-CNN model and a Data Deposit Mechanism (FRDDM-DQN), is proposed. A Faster R-CNN model (FR) is introduced and optimized to extract obstacle information from images, and a new replay memory Data Deposit Mechanism (DDM) is designed to train an agent with better performance. During training, a two-part training approach is used to reduce the time spent on training as well as on retraining when the scenario changes. To verify the performance of the proposed method, a series of experiments, including training experiments, test experiments, and typical-episode experiments, is conducted in a 3D simulation environment. Experimental results show that the agent trained by the proposed FRDDM-DQN is able to navigate autonomously and avoid collisions, and performs better than the FR-DQN, FR-DDQN, FR-Dueling DQN, YOLO-based YDDM-DQN, and original FR output-based FR-ODQN.
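The abstract describes the overall pipeline only at a high level: Faster R-CNN detections are turned into a state for a DQN agent, and a Data Deposit Mechanism filters which transitions enter the replay memory. The sketch below illustrates one way such a pipeline could be wired together in PyTorch; it is not the authors' implementation. The state encoding, the 7-action discretisation, the network sizes, and the deposit rule are all illustrative assumptions.

# Minimal sketch (assumptions noted above, not the paper's code) of an
# FRDDM-DQN-style pipeline: Faster R-CNN detections -> fixed-length state,
# a plain DQN head, and a selective "data deposit" replay memory.
import random
from collections import deque

import torch
import torch.nn as nn

N_OBSTACLES = 5                    # keep the K nearest detections (assumption)
STATE_DIM = 4 + N_OBSTACLES * 5    # UAV state + (box, score) per obstacle
N_ACTIONS = 7                      # discretised manoeuvre commands (assumption)


def encode_state(uav_state, detections):
    """Flatten the UAV state and the top-K detections (boxes + scores) into one vector."""
    boxes = detections["boxes"][:N_OBSTACLES]      # (M, 4) tensor from the detector
    scores = detections["scores"][:N_OBSTACLES]    # (M,) confidence scores
    feat = torch.zeros(N_OBSTACLES, 5)
    feat[: boxes.shape[0], :4] = boxes
    feat[: scores.shape[0], 4] = scores
    return torch.cat([uav_state, feat.flatten()])


class QNetwork(nn.Module):
    """Plain DQN head mapping the encoded obstacle/UAV state to action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


class DepositReplayMemory:
    """Replay memory with a simple deposit rule: collision and goal-reaching
    transitions are always stored, ordinary transitions only with some
    probability, so rare informative experiences are not crowded out.
    (The concrete rule here is an assumption, standing in for the paper's DDM.)"""
    def __init__(self, capacity=50_000, keep_prob=0.5):
        self.buffer = deque(maxlen=capacity)
        self.keep_prob = keep_prob

    def deposit(self, transition, collided, reached_goal):
        if collided or reached_goal or random.random() < self.keep_prob:
            self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

The two-part training approach mentioned in the abstract would, under these assumptions, amount to first optimizing the Faster R-CNN front end on obstacle images and then freezing it while the Q-network and replay memory above are trained, so only the light DQN head needs retraining when the scenario changes.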
Keywords: Faster R-CNN model; Replay memory Data Deposit Mechanism (DDM); Two-part training approach; Image-based Autonomous Navigation and Collision Avoidance (ANCA); Unmanned Aerial Vehicle (UAV)
Fei WANG, Xiaoping ZHU, Zhou ZHOU, Yang TANG
School of Astronautics,Northwestern Polytechnical University,Xi'an 710072,China
School of Aeronautics,Northwestern Polytechnical University,Xi'an 710072,China