Gait recognition with united local multiscale and global context features
Existing gait recognition methods can extract rich gait information in the spatial dimension. However, they often overlook fine-grained temporal features within local regions and temporal contextual information across different sub-regions. Considering that gait recognition is a fine-grained recognition problem and that each individual's gait carries unique temporal context, we propose a gait recognition method that unites local multiscale and global contextual temporal features. The entire gait sequence is divided into subsequences at multiple temporal resolutions, and fine-grained temporal features are extracted within each local subsequence. A Transformer is used to extract temporal context among the different subsequences, and the global feature is formed by integrating all subsequences according to this contextual information. We conducted extensive experiments on two public datasets. The proposed model achieves rank-1 accuracies of 98.0%, 95.4%, and 87.0% under the three walking conditions of the CASIA-B dataset. On the OU-MVLP dataset, it achieves a rank-1 accuracy of 90.7%. The proposed method achieves state-of-the-art results and can serve as a reference for other gait recognition methods.
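The pipeline described above can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's exact architecture: the feature dimension, the set of temporal scales, the max-pooling within subsequences, and the single-layer Transformer encoder are all assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class LocalGlobalTemporal(nn.Module):
    """Sketch of local multiscale temporal features + global Transformer context.

    Hypothetical configuration for illustration only; the paper's actual
    layer sizes and pooling choices may differ.
    """
    def __init__(self, feat_dim=128, scales=(2, 4), n_heads=4):
        super().__init__()
        self.scales = scales
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.context = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):            # x: (batch, frames, feat_dim) per-frame features
        tokens = []
        for s in self.scales:        # split the sequence at each temporal resolution
            b, t, d = x.shape
            t_crop = (t // s) * s    # drop trailing frames that do not fill a window
            sub = x[:, :t_crop].reshape(b, t_crop // s, s, d)
            # fine-grained temporal feature within each local subsequence
            tokens.append(sub.max(dim=2).values)
        seq = torch.cat(tokens, dim=1)   # one token per subsequence, all scales
        ctx = self.context(seq)          # temporal context across subsequences
        return ctx.mean(dim=1)           # integrate into a single global feature

model = LocalGlobalTemporal()
out = model(torch.randn(2, 16, 128))     # 2 sequences of 16 frames
print(out.shape)                          # torch.Size([2, 128])
```

With scales (2, 4) and 16 frames, the model forms 8 + 4 = 12 subsequence tokens; the Transformer then lets every subsequence attend to every other before the tokens are averaged into the global descriptor.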