Hierarchical policy with deep-reinforcement learning for nonprehensile multiobject rearrangement
Nonprehensile multiobject rearrangement is the robotic task of planning feasible paths and transferring multiple objects to their predefined target poses without grasping. It must consider both how each object reaches its target and the order in which the objects move, which considerably increases the complexity of the problem. Thus, we propose a hierarchical policy for nonprehensile multiobject rearrangement based on deep reinforcement learning. We use imitation learning and reinforcement learning to train a rollout policy. In the high-level policy, the policy network directs the Monte Carlo tree search algorithm to efficiently seek the ideal rearrangement sequence for the objects. In the low-level policy, the robot plans paths according to the order of path primitives and manipulates the objects to approach their target poses one by one. Our experiments show that the proposed method achieves a higher success rate, fewer steps, and shorter path length than state-of-the-art methods.
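The abstract describes a high-level policy in which a learned policy network guides Monte Carlo tree search over the order in which objects are moved. The following is a minimal sketch of that general idea, not the authors' implementation: the `policy_prior` and `rollout_value` functions are placeholder assumptions standing in for the trained rollout/policy network, and a PUCT-style selection rule is assumed for combining the network prior with search statistics.

```python
# Hypothetical sketch: policy-guided MCTS over rearrangement orders.
import math
import random

class Node:
    def __init__(self, remaining, prior=1.0):
        self.remaining = remaining   # objects not yet moved to their targets
        self.prior = prior           # prior probability from the policy network
        self.children = {}           # object -> child Node
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def policy_prior(remaining):
    """Stand-in for the learned policy network: a prior over which object to
    move next. Uniform here; the real network would score the scene state."""
    p = 1.0 / len(remaining)
    return {obj: p for obj in remaining}

def rollout_value(remaining):
    """Stand-in for a rollout/value estimate of a partial rearrangement order."""
    return 1.0 - 0.1 * len(remaining) - 0.01 * random.random()

def puct_select(node, c_puct=1.4):
    """Pick the child maximizing value + exploration bonus weighted by the prior."""
    best, best_score = None, -float("inf")
    for obj, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        if child.value() + u > best_score:
            best, best_score = obj, child.value() + u
    return best

def search(objects, n_simulations=200):
    root = Node(frozenset(objects))
    for _ in range(n_simulations):
        node, path = root, []
        # Selection: descend while children exist.
        while node.children:
            path.append(node)
            node = node.children[puct_select(node)]
        # Expansion: add children weighted by the policy prior.
        if node.remaining:
            priors = policy_prior(node.remaining)
            for obj in node.remaining:
                node.children[obj] = Node(node.remaining - {obj}, priors[obj])
        # Evaluation and backpropagation.
        value = rollout_value(node.remaining)
        for n in path + [node]:
            n.visits += 1
            n.value_sum += value
    # Read off the most-visited sequence as the rearrangement order.
    order, node = [], root
    while node.children:
        obj = max(node.children, key=lambda o: node.children[o].visits)
        order.append(obj)
        node = node.children[obj]
    return order

if __name__ == "__main__":
    print(search(["mug", "box", "book"]))
```

In the paper's setting, the low-level policy would then execute each object's move with pushing path primitives in the order returned by the search.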
Keywords: Rearrangement; Reinforcement learning; Monte Carlo tree search
Fan Bai, Fei Meng, Jianbang Liu, Jiankun Wang, Max Q.-H. Meng
Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin N.T., Hong Kong SAR, China
Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China
Shenzhen Research Institute of the Chinese University of Hong Kong, Shenzhen, China
Shenzhen Key Laboratory of Robotics Perception and Intelligence, China