Action recognition algorithm based on joint feature-enhanced graph convolution
In recent years, human action recognition has seen widespread application in fields such as intelligent security and surveillance, medical rehabilitation, and smart homes, making it a prominent research topic in computer vision. Common methods typically extract features from skeleton data and RGB video data separately before fusing them. However, these methods often fail to account for the correlation between RGB and skeletal features, leading to suboptimal performance in visually similar scenes. To address this issue, an action recognition algorithm based on a Joint Feature-Enhanced Graph Convolutional Network (JFE-GCN) is proposed. First, a joint feature extraction module (JFEM) processes the RGB video to capture movement details of the human body. Then, the joint feature-enhanced graph convolution module (JFE-GC) extracts local motion information around the body's joint points in the RGB video as joint features, establishing a connection between joint-region features and skeletal features and thereby enhancing the skeletal topology used in the graph convolution. Extensive experiments demonstrate the effectiveness of JFE-GCN, which significantly improves action recognition accuracy.
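The core idea above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the patch-pooling stand-in for JFEM, the cosine-similarity adjacency enhancement, and all function names and shapes are illustrative assumptions chosen only to show how RGB features sampled around joint locations might modulate the skeletal topology inside a graph convolution.

```python
import numpy as np

def extract_joint_features(frame, joints_2d, patch=8):
    """Hypothetical stand-in for JFEM: crop a small window around each
    2D joint location and average-pool it into a per-joint RGB feature."""
    H, W, _ = frame.shape
    feats = []
    for x, y in joints_2d:
        x0, x1 = max(0, x - patch), min(W, x + patch)
        y0, y1 = max(0, y - patch), min(H, y + patch)
        feats.append(frame[y0:y1, x0:x1].mean(axis=(0, 1)))
    return np.stack(feats)                      # shape (J, 3)

def enhanced_graph_conv(skel_feats, joint_feats, A, Wg):
    """Sketch of a JFE-GC-style layer: the skeletal adjacency A is
    modulated by the similarity of RGB joint-region features before
    the usual normalized graph convolution is applied."""
    # cosine similarity between joint-region features
    f = joint_feats / (np.linalg.norm(joint_feats, axis=1, keepdims=True) + 1e-8)
    S = f @ f.T
    A_hat = A * (1.0 + S)                       # enhance skeletal topology
    D = A_hat.sum(axis=1, keepdims=True) + 1e-8
    return (A_hat / D) @ skel_feats @ Wg        # row-normalized propagation
```

For example, with a 3-joint toy skeleton, `extract_joint_features(frame, joints)` yields a `(3, 3)` RGB feature matrix, and `enhanced_graph_conv` maps `(3, C_in)` skeletal features to `(3, C_out)` through the similarity-enhanced adjacency. In the actual network these operations would be learned end to end rather than fixed as here.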