Cognitive approach for human-robot collaborative assembly scenes based on multi-scale object detection
The rapid understanding of Human-Robot Collaborative (HRC) assembly scenes is of great practical significance for improving the cognitive ability of collaborative robots and realizing HRC assembly. To address the large variation in object scales and the lack of a unified scene description framework in the cognition of unstructured HRC assembly scenes, a Lightweight Multi-Scale object detection Network (LMS-Net) was constructed, and an anchor clustering mechanism was introduced into the network training process to improve the accuracy of multi-scale object detection. The LMS-Net detection results were then converted into a human-object interaction graph, and a meta-description model of the HRC assembly scene was established. On this basis, a cognitive method for HRC assembly scenes based on multi-scale object detection was proposed. Experimental results on the self-built dataset HRC-Action showed that the proposed multi-scale object detection network achieved high accuracy (89% on average) and fast speed (58.7 FPS on average on a deep learning workstation, 25 FPS on average on a Jetson Nano B01), and that the proposed HRC assembly scene cognition method had good feasibility and practicability.
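As a minimal illustration of the anchor clustering mentioned above, the sketch below shows one common way such a mechanism can be realized: k-means over the widths and heights of ground-truth boxes with a 1 - IoU distance, in the style of YOLO-family detectors. The function names (iou_wh, cluster_anchors), the synthetic data, and the k-means/IoU formulation are assumptions for illustration only; the paper's actual clustering procedure may differ.

```python
# Hypothetical sketch: k-means anchor clustering on (width, height) pairs
# using a 1 - IoU distance, as in YOLO-style detectors (an assumption;
# not necessarily the clustering used in LMS-Net).
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors described only by (width, height)."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0:1] * boxes[:, 1:2] +
             (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def cluster_anchors(boxes_wh, k=9, iters=100, seed=0):
    """k-means on (w, h) with distance d = 1 - IoU; returns k anchors sorted by area."""
    rng = np.random.default_rng(seed)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor with the highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)
        # Update each anchor as the mean of its assigned boxes (keep it if empty).
        new = np.array([boxes_wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

# Usage with synthetic normalized (w, h) pairs standing in for a training set.
rng = np.random.default_rng(1)
boxes = rng.random((500, 2)) * 0.9 + 0.05
print(cluster_anchors(boxes, k=9))
```

In practice, the resulting anchors would be distributed across the detection heads of a multi-scale network (small anchors to high-resolution feature maps, large anchors to low-resolution ones) so that objects of widely differing scales are matched to suitable priors.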