Study Data from Hangzhou Dianzi University Update Knowledge of Androids (A Dynamic Head Gesture Recognition Method for Realtime Intention Inference and Its Application To Visual Human-robot Interaction)

Fresh data on Robotics Androids are presented in a new report. According to news reporting originating from Hangzhou, People's Republic of China, by NewsRx correspondents, the research stated: "Head gesture is a natural and non-verbal communication method for human-computer and human-robot interaction, conveying attitudes and intentions. However, existing vision-based recognition methods cannot meet the precision and robustness requirements of interaction."

Funders for this research include the Key Research and Development Project of Zhejiang Province, the Fundamental Research Funds for the Provincial Universities of Zhejiang, the National Natural Science Foundation of China (NSFC), and the Natural Science Foundation of Zhejiang Province.

Our news editors obtained a quote from the research from Hangzhou Dianzi University: "Due to limited computational resources, applying most high-accuracy methods to mobile and onboard devices is challenging. Moreover, the wearable device-based approach is inconvenient and expensive. To deal with these problems, an end-to-end two-stream fusion network named TSIR3D is proposed to identify head gestures from videos for analyzing human attitudes and intentions. Inspired by the Inception and ResNet architectures, the width and depth of the network are increased to capture motion features sufficiently. Meanwhile, convolutional kernels are expanded from the spatial domain to the spatiotemporal domain for temporal feature extraction. The fusion position of the two-stream channel is explored under an accuracy/complexity trade-off. Furthermore, a dynamic head gesture dataset named DHG and a behavior tree are designed for human-robot interaction. Experimental results show that the proposed method has advantages in real-time performance on a remote server or an onboard computer. Furthermore, its accuracy on the DHG surpasses most state-of-the-art vision-based methods and is even better than most previous approaches based on head-mounted sensors."
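The quoted passage mentions expanding convolutional kernels from the spatial domain to the spatiotemporal domain for temporal feature extraction. One common way to do this (a minimal pure-Python sketch, not the paper's code; the function names and the 1/T scaling convention are illustrative assumptions) is to "inflate" a 2D kernel by repeating it along a new temporal axis, so that a static video produces the same response as the original 2D filter while the kernel gains a temporal receptive field:

```python
def conv2d(img, k):
    """Valid-mode 2D convolution (cross-correlation) on nested lists."""
    kh, kw = len(k), len(k[0])
    H, W = len(img), len(img[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

def inflate(k, T):
    """Expand a 2D kernel to 3D by repeating it T times along the
    temporal axis, scaled by 1/T so static inputs are unchanged."""
    return [[[v / T for v in row] for row in k] for _ in range(T)]

def conv3d(video, k3):
    """Valid-mode 3D (spatiotemporal) convolution on nested lists."""
    T = len(k3)
    kh, kw = len(k3[0]), len(k3[0][0])
    L, H, W = len(video), len(video[0]), len(video[0][0])
    return [[[sum(video[t + c][i + a][j + b] * k3[c][a][b]
                  for c in range(T) for a in range(kh) for b in range(kw))
              for j in range(W - kw + 1)]
             for i in range(H - kh + 1)]
            for t in range(L - T + 1)]

# On a static "video" (the same frame repeated), the inflated 3D kernel
# reproduces the 2D response at every time step, so spatial filters keep
# their meaning while extending across frames.
frame = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, -1]]
video = [frame] * 3           # 3 identical frames
out2d = conv2d(frame, kernel)
out3d = conv3d(video, inflate(kernel, 2))
assert all(out3d[t] == out2d for t in range(len(out3d)))
```

A real network would of course use a deep-learning framework's 3D convolution layers over multi-channel tensors; the point here is only the kernel-expansion idea the quote describes.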

Keywords: Hangzhou, People’s Republic of China, Asia, Androids, Emerging Technologies, Human-Robot Interaction, Machine Learning, Robot, Robotics, Hangzhou Dianzi University

2024

Robotics & Machine Learning Daily News
ISSN:
Year, Volume (Issue): 2024 (Feb. 9)