Multi-Scale Fusion and Dual Output U-Net Network for Person Re-Identification
Due to variable pedestrian postures, occlusion, and other adverse factors, Person Re-Identification (Re-ID) models often struggle to extract the key features of pedestrians. To enhance the feature expression ability of the model, this study proposes a Re-ID method based on multi-scale fusion and a dual-output U-Net network, aiming to address the challenges of extracting key pedestrian features and improving feature expression, which limit existing methods. First, a multi-scale fusion dual-output U-Net network is proposed, with the output features jointly constrained by Euclidean and divergence distances. Second, a joint loss function is introduced to address the difficulty of training the Generative Adversarial Network (GAN) to convergence, thereby improving the convergence speed of training. Extensive experiments on three public benchmark datasets demonstrate that the proposed feature extraction network outperforms classical feature extraction networks, improving Mean Average Precision (mAP) by more than 10%. In addition, the mAP of the proposed Re-ID method improves by approximately 2% compared to that of the mainstream method. The proposed method enhances the feature expression ability of the model and improves the accuracy of Re-ID.
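To illustrate the joint constraint on the two output branches described in the abstract, the following is a minimal sketch, assuming a PyTorch implementation in which the dual-output U-Net produces two feature maps. The function name joint_constraint_loss, the choice of KL divergence as the "divergence distance", and the weights alpha and beta are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_constraint_loss(feat_a, feat_b, alpha=1.0, beta=1.0):
    """Constrain the two U-Net output feature maps with a Euclidean
    term and a divergence term, in the spirit of the abstract.
    alpha and beta are hypothetical weighting hyperparameters."""
    # Euclidean-distance constraint between the two output tensors
    euclidean = F.mse_loss(feat_a, feat_b)
    # KL divergence between softmax-normalized feature distributions
    # (one plausible reading of the "divergence distance" constraint)
    p = F.log_softmax(feat_a.flatten(1), dim=1)
    q = F.softmax(feat_b.flatten(1), dim=1)
    divergence = F.kl_div(p, q, reduction="batchmean")
    return alpha * euclidean + beta * divergence

# Usage: two branch outputs from the dual-output U-Net
feat_a = torch.randn(8, 256, 16, 8)  # batch of feature maps, branch 1
feat_b = torch.randn(8, 256, 16, 8)  # branch 2
loss = joint_constraint_loss(feat_a, feat_b)
```

In this reading, the Euclidean term pulls the two branch outputs close in feature space, while the divergence term aligns their normalized distributions; combining them into one weighted objective is also how a joint loss could stabilize GAN training, as the abstract suggests.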
Keywords: Person Re-Identification (Re-ID); Generative Adversarial Network (GAN); feature extraction; multi-scale fusion; joint constraint