Study Results from University of Transport and Communications Update Understanding of Robotics [Experimental Research on Avoidance Obstacle Control for Mobile Robots Using Q-Learning (QL) and Deep Q-Learning (DQL) Algorithms in Dynamic ...]
Investigators discuss new findings in robotics. According to news originating from Hanoi, Vietnam, by NewsRx correspondents, research stated, “This study provides simulation and experimental results on techniques for avoiding static and dynamic obstacles using a deep Q-learning (DQL) reinforcement learning algorithm for a two-wheel mobile robot with independent control.” The news editors obtained a quote from the research from University of Transport and Communications: “This method integrates the Q-learning (QL) algorithm with a neural network, where the neural networks in the DQL algorithm act as approximators for the Q matrix table for each pair (state-action). The effectiveness of the proposed solution was confirmed through simulations, programming, and practical experimentation. A comparison was drawn between the DQL algorithm and the QL algorithm. Initially, the mobile robot was connected to the control script using the Robot Operating System (ROS). The mobile robot was programmed in Python within the ROS operating system, and the DQL controller was programmed in Gazebo software. The mobile robot underwent testing in a workshop with various experimental scenarios considered.”
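The core idea described above, replacing the Q-learning table with a neural network that approximates Q(state, action), can be sketched as follows. This is not the authors' code: the state size, the three steering actions, the network shape, and the single fixed transition used for training are all illustrative assumptions, and the network is a hand-rolled two-layer NumPy model rather than the controller trained in Gazebo.

```python
# Minimal deep-Q-learning sketch (illustrative, not the study's implementation):
# a small neural network stands in for the Q matrix of tabular QL, trained
# toward the standard target  y = r + gamma * max_a' Q(s', a').
import numpy as np

rng = np.random.default_rng(0)
N_STATE, N_HIDDEN, N_ACTION = 4, 16, 3   # assumed: 4 range features -> {left, straight, right}
GAMMA, LR = 0.9, 0.05

# Two-layer network mapping a state to a vector of action values.
W1 = rng.normal(0.0, 0.1, (N_STATE, N_HIDDEN))
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_ACTION))

def q_values(s):
    """Forward pass: hidden activations and Q(s, ·) for all actions."""
    h = np.tanh(s @ W1)
    return h, h @ W2

def train_step(s, a, r, s_next, done):
    """One DQL update on a single (state, action, reward, next-state) transition."""
    global W1, W2
    h, q = q_values(s)
    _, q_next = q_values(s_next)
    target = r if done else r + GAMMA * np.max(q_next)
    err = q[a] - target                      # temporal-difference error
    # Backpropagate 0.5 * err^2 through the two layers by hand.
    dW2 = np.outer(h, np.eye(N_ACTION)[a]) * err
    dh = err * W2[:, a]
    dW1 = np.outer(s, dh * (1.0 - h**2))
    W1 -= LR * dW1
    W2 -= LR * dW2
    return float(err**2)                     # squared TD error (loss)

# Repeatedly fit one terminal transition: the squared TD error should shrink
# as the network's Q estimate approaches the reward.
s = np.array([0.2, 0.5, 0.1, 0.9])
s2 = np.array([0.3, 0.4, 0.2, 0.8])
losses = [train_step(s, a=1, r=1.0, s_next=s2, done=True) for _ in range(500)]
```

In a full agent the transitions would come from the robot's interaction with the environment (simulated in Gazebo here) and would typically pass through a replay buffer and a separate target network; this sketch only shows the function-approximation step that distinguishes DQL from tabular QL.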
Keywords: University of Transport and Communications, Hanoi, Vietnam, Asia, Algorithms, Emerging Technologies, Machine Learning, Nano-robot, Robot, Robotics