Cross-modality person re-identification, which aims to match visible-light and infrared images of the same person, is widely used in intelligent security monitoring systems. Owing to the inherent differences between the visible and infrared modalities, it remains highly challenging in practical applications. To alleviate this modality gap, researchers have proposed many effective solutions. However, existing methods extract features from each modality without exploiting the corresponding modality information, so the extracted features lack discriminability. To improve the discriminability of the extracted features, this study proposes a cross-modality person re-identification method based on attention feature fusion. By designing an efficient feature extraction network and an attention feature fusion module, and by jointly optimizing multiple loss functions, the method fuses and aligns information across the two modalities, thereby improving the model's person-matching accuracy. Experimental results show that the proposed method achieves strong performance on multiple datasets.
Keywords
cross-modality person re-identification/attention mechanism/feature fusion/modality difference/modality alignment