Research on Person Re-Identification Method for Large-Angle Viewpoint Differences
Person re-identification (Re-ID) aims to determine whether a person of interest has appeared under different cameras, or under the same camera at different times. Because different cameras capture people from different viewpoints, the accuracy of person Re-ID can be adversely affected. Therefore, this study proposes a person Re-ID method based on the fusion of appearance and gait features to address the reduced recognition rate caused by differing viewpoints of a person relative to the camera. Viewpoint information is used to estimate the importance weights of an RGB image and a Gait Energy Image (GEI), and a weighted fusion is then performed to overcome the effects of viewpoint differences. Specifically, a ResNet-50 network is first used to extract the features of each image in an image sequence, which are then aggregated into appearance features via temporal pooling. Second, another ResNet-50 model extracts gait features from the GEI. Third, after the viewpoint of a person is estimated, a mapping function maps the viewpoint to importance weights for the two features. Finally, based on an auto-encoder structure, the two features are weighted and fused under the guidance of the importance weights to generate fused features that are robust to viewpoint changes. Experimental results on the CASIA-B dataset show that the proposed method yields significant improvements in the mAP and Rank-1 evaluation metrics for person Re-ID under large-angle viewpoint differences, with the highest accuracy improvement reaching 2.7%.
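The viewpoint-guided fusion step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sinusoidal mapping function and the simple linear fusion below are assumptions (the paper's mapping function is learned/designed separately, and its fusion uses an auto-encoder), and the toy 4-D vectors stand in for ResNet-50 appearance and GEI gait features.

```python
import numpy as np

def viewpoint_to_weights(theta_deg):
    """Map a person's viewpoint angle (0 deg = frontal, 90 deg = side view)
    to importance weights for the appearance and gait features.
    The sinusoidal form is an illustrative assumption: gait silhouettes
    are most informative near a side view, so the gait weight peaks there."""
    w_gait = abs(np.sin(np.deg2rad(theta_deg)))
    w_app = 1.0 - w_gait
    return w_app, w_gait

def weighted_fusion(f_app, f_gait, theta_deg):
    """Fuse appearance and gait feature vectors under viewpoint guidance.
    A plain weighted sum stands in for the paper's auto-encoder fusion."""
    w_app, w_gait = viewpoint_to_weights(theta_deg)
    return w_app * f_app + w_gait * f_gait

# Toy 4-D features; real features would be high-dimensional ResNet-50 outputs.
f_app = np.array([1.0, 0.0, 1.0, 0.0])
f_gait = np.array([0.0, 1.0, 0.0, 1.0])
fused_side = weighted_fusion(f_app, f_gait, 90.0)   # side view: gait dominates
fused_front = weighted_fusion(f_app, f_gait, 0.0)   # frontal view: appearance dominates
```

Under this assumed mapping, a frontal view reduces the fused feature to the appearance feature and a side view reduces it to the gait feature, with intermediate angles blending the two.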
person re-identification (Re-ID); viewpoint differences; appearance feature; gait feature; feature fusion