Robotics & Machine Learning Daily News, 2024, Issue (Jun. 4): 28-28.

Studies from Tongji University Reveal New Findings on Robotics (Relaxing the Limitations of the Optimal Reciprocal Collision Avoidance Algorithm for Mobile Robots in Crowds)





Abstract

By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – New research on Robotics is the subject of a report. According to news reporting out of Shanghai, People’s Republic of China, by NewsRx editors, research stated, “The Optimal Reciprocal Collision Avoidance (ORCA) algorithm is widely used for modeling agents in collision avoidance scenarios. However, suffering from limitations such as the improper reciprocal assumption that each agent is supposed to take half the responsibility for collision avoidance, the performance of ORCA-based mobile robots in crowds is not ideal.” Financial support for this research came from the National Natural Science Foundation of China (NSFC). Our news journalists obtained a quote from the research from Tongji University: “In this letter, to relax these limitations, we firstly simplify the planning process of ORCA from the principle horizon to solve ORCA being unsolvable in some cases. Then the escape velocity and collision avoidance responsibility are explored simultaneously based on deep reinforcement learning (DRL) to solve the limitation of local optimum caused by only exploring the responsibility in other works. We compare our method with baselines in environments with different numbers of pedestrians and test in different real-world scenarios.”
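The “half the responsibility” assumption the abstract criticizes comes from how classic ORCA builds its velocity constraint: each agent shifts its velocity by half of the smallest correction u that would resolve the predicted collision. The sketch below illustrates that construction with the responsibility share exposed as a parameter alpha (fixed at 0.5 in standard ORCA; the Tongji work learns it with DRL instead). This is a minimal illustration, not the paper’s method: the function name is hypothetical and only the cut-off-circle case of the velocity obstacle is handled.

```python
import math

def orca_halfplane(p_a, p_b, v_a, v_b, r_a, r_b, tau, alpha=0.5):
    """Build agent A's ORCA half-plane against agent B.

    Handles only the cut-off-circle case of the velocity obstacle
    (relative velocity pointing back towards the cone apex); the
    cone-leg projections of full ORCA are omitted for brevity.

    Returns (point, normal): permitted velocities v for A satisfy
    (v - point) . normal >= 0.  alpha is A's share of the avoidance
    responsibility; classic ORCA fixes alpha = 0.5, whereas the
    paper explores learning it.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    p = (p_b[0] - p_a[0], p_b[1] - p_a[1])      # relative position
    v_rel = (v_a[0] - v_b[0], v_a[1] - v_b[1])  # relative velocity
    r = r_a + r_b                               # combined radius
    # Offset of v_rel from the centre of the VO cut-off circle (p / tau).
    w = (v_rel[0] - p[0] / tau, v_rel[1] - p[1] / tau)
    # Guard: this sketch only covers the cut-off-circle region.
    assert dot(w, p) < 0 and dot(w, p) ** 2 > r * r * dot(w, w)
    w_len = math.hypot(w[0], w[1])
    unit_w = (w[0] / w_len, w[1] / w_len)
    # u: smallest change to v_rel that pushes it out of the velocity obstacle.
    scale = r / tau - w_len
    u = (scale * unit_w[0], scale * unit_w[1])
    # A absorbs only an alpha-fraction of u; B is assumed to take the rest.
    point = (v_a[0] + alpha * u[0], v_a[1] + alpha * u[1])
    return point, unit_w

# Head-on example: two agents of radius 0.5, closing along the x-axis.
point, n = orca_halfplane((0.0, 0.0), (2.0, 0.0),
                          (0.35, 0.0), (-0.35, 0.0),
                          0.5, 0.5, tau=2.0, alpha=0.5)
# Here u = (-0.2, 0); with alpha = 0.5 the half-plane reduces to
# v_x <= 0.25, i.e. agent A contributes half the required slowdown.
print(point, n)
```

Setting alpha to anything other than 0.5 shifts the constraint: a larger alpha makes A yield more, a smaller one makes it yield less, which is the degree of freedom the paper’s DRL policy exploits instead of fixing it by the reciprocity assumption.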

Keywords

Shanghai/People’s Republic of China/Asia/Algorithms/Emerging Technologies/Machine Learning/Nano-robot/Robotics/Tongji University


Publication Year

2024
Robotics & Machine Learning Daily News

