Reports Outline Robotics Study Findings from Guangxi University (A Pixel-level Grasp Detection Method Based On Efficient Grasp Aware Network)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Data detailed on Robotics have been presented. According to news reporting originating from Nanning, People's Republic of China, by NewsRx correspondents, research stated, "This work proposes a novel grasp detection method, the Efficient Grasp Aware Network (EGA-Net), for robotic visual grasp detection. Our method obtains semantic information for grasping through feature extraction."

Financial support for this research came from the National Natural Science Foundation of Guangxi Province.

Our news editors obtained a quote from the research from Guangxi University: "It efficiently obtains feature channel weights related to grasping tasks through the constructed ECA-ResNet module, which can smooth the network's learning. Meanwhile, we use concatenation to obtain low-level features with rich spatial information. Our method inputs an RGB-D image and outputs the grasp poses and their quality score. The EGA-Net is trained and tested on the Cornell and Jacquard datasets, and we achieve 98.9% and 95.8% accuracy, respectively. The proposed method only takes 24 ms to process an RGB-D image, allowing real-time performance. Moreover, our method achieved better results in the comparison experiment. In the real-world grasp experiments, we use a 6-degree-of-freedom (DOF) UR-5 robotic arm to demonstrate its robust grasping of unseen objects in various scenes. We also demonstrate that our model can successfully grasp different types of objects without any processing in advance."
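The news item does not include implementation details, but the ECA-ResNet module described in the quote is consistent with the published Efficient Channel Attention (ECA) mechanism, which reweights feature channels using a lightweight 1-D convolution over globally pooled features. Below is a minimal PyTorch sketch of such a block; the class names, kernel size, and block layout are illustrative assumptions, not the authors' actual EGA-Net code.

```python
# Hypothetical sketch only: an ECA-style residual block, not the EGA-Net source.
import torch
import torch.nn as nn


class ECALayer(nn.Module):
    """Efficient Channel Attention: per-channel weights from a 1-D conv
    over globally average-pooled features."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (B, C, H, W) -> channel descriptor (B, C, 1, 1)
        y = self.pool(x)
        # Treat channels as a 1-D sequence to capture local cross-channel interaction
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Rescale each channel of the input by its learned attention weight
        return x * self.sigmoid(y)


class ECAResBlock(nn.Module):
    """Residual block with ECA applied to the body before the skip connection
    (an assumed layout for an 'ECA-ResNet' style module)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.eca = ECALayer()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.eca(self.body(x)))


# Example usage: a 64-channel feature map, e.g. from an RGB-D encoder stage
feats = torch.randn(1, 64, 56, 56)
out = ECAResBlock(64)(feats)  # same shape, with channels reweighted by attention
```

In a pixel-level grasp detector of this kind, such channel attention lets the network emphasize feature channels relevant to graspable regions while the concatenated low-level features preserve spatial detail; the exact placement of these blocks within EGA-Net is described in the original paper, not in this news item.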
Keywords: Nanning, People's Republic of China, Asia, Emerging Technologies, Machine Learning, Robotics, Robots, Guangxi University