
Single Soldier Action Recognition Based on Graph Convolution and Rule Matching

Skeleton-based action recognition methods suffer from limited semantic understanding, and incomplete skeleton data further lowers recognition accuracy. To address these problems, this paper proposes a single-soldier action recognition method that combines graph convolution and rule matching with fused semantic analysis. First, the OpenPose pose estimation model extracts skeletal keypoints from soldier combat videos. Then, depending on how many valid skeletal keypoints are extracted, the method dynamically selects either a detection-based recognition method built on YOLO (You Only Look Once) or a graph-convolution-based recognition method. Finally, for low-confidence predictions from the graph convolutional network, a rule matching algorithm is introduced to complete the final action decision. Experimental results show that, compared with the Spatial Temporal Graph Convolutional Network (ST-GCN) and the Two-Stream Adaptive Graph Convolutional Network (2s-AGCN), the proposed method improves accuracy on the single-soldier action recognition task by about 38% and 11%, respectively.
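The pipeline described in the abstract can be summarized as a three-stage dispatch: extract keypoints, route to YOLO or the graph convolutional network based on keypoint availability, and fall back to rule matching when the network's confidence is low. The sketch below illustrates that control flow only; the function names, the keypoint-count threshold, and the confidence threshold are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of the dispatch logic from the abstract.
# The recognizers are passed in as callables; thresholds are assumed values.

def recognize_action(keypoints, yolo_fn, gcn_fn, rule_fn,
                     min_valid=10, conf_thresh=0.8):
    """Route one skeleton sample to YOLO- or GCN-based recognition.

    keypoints : list of per-joint coordinates; None marks a joint
                OpenPose failed to detect (COCO layout has 18 joints).
    yolo_fn   : fallback recognizer operating on the raw detection.
    gcn_fn    : graph-convolution recognizer, returns (label, confidence).
    rule_fn   : rule-matching refinement, returns a final label.
    """
    valid = [kp for kp in keypoints if kp is not None]

    if len(valid) < min_valid:
        # Too few valid skeletal keypoints: use the detection-based method.
        return yolo_fn(keypoints)

    label, confidence = gcn_fn(valid)
    if confidence < conf_thresh:
        # Low-confidence GCN prediction: refine via rule matching.
        label = rule_fn(valid, label)
    return label
```

Passing the recognizers as callables keeps the routing logic independent of any particular model implementation.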

action recognition; semantic analysis; graph convolution; rule matching; OpenPose; YOLO

童立靖、冯金芝、英溢卓、曹楠


School of Information, North China University of Technology, Beijing 100144, China


Beijing Municipal Universities Young Top-Notch Talent Training Program

CIT&TCD201904009

2024

Journal of North China University of Technology
North China University of Technology


Impact factor: 0.368
ISSN: 1001-5477
Year, Volume (Issue): 2024, 36(1)