
Research on cross-modal pedestrian re-identification based on low-high frequency multi-scale fusion

Cross-modal pedestrian re-identification (Re-ID) technology holds vast application potential in public security, disaster response, and crime scene investigation. To efficiently utilize the diverse information between different modalities and explore effective Re-ID methods, a multiple frequency multi-scale embedding (MFME) model is proposed. The model uses a multi-scale information fusion (MIF) module to capture pedestrian features across multiple scales, extracting both the global structure and the local details of an image, and a multi-frequency feature embedding (MFFE) module to aggregate the multi-frequency information of pedestrians, ensuring that the model maintains high accuracy under varying environmental and lighting conditions and adapts to modality changes. Experimental results demonstrate that the proposed MFME model achieves a Rank-1 accuracy of 75.79% and an mAP of 72.02% on the public SYSU-MM01 dataset. The model effectively mines and exploits diverse cross-modal information, improves pedestrian re-identification accuracy, and adapts better to dynamic real-world scenarios.
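The abstract names two components: a multi-scale information fusion (MIF) module and a multi-frequency feature embedding (MFFE) module. The minimal PyTorch sketch below only illustrates how such modules could be organised; the class names, branch structure, channel sizes, and pooling-based frequency split are illustrative assumptions and do not reproduce the authors' implementation.

# Illustrative sketch of multi-scale fusion and low/high-frequency aggregation.
# All design details here are assumptions, not the MFME architecture itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleFusion(nn.Module):
    """Captures global structure and local detail with parallel branches
    at different receptive fields, then fuses them with a 1x1 convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)   # local detail
        self.middle = nn.Conv2d(channels, channels, kernel_size=5, padding=2)  # mid-range context
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)           # fuse branches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global branch: pool to 1x1 to summarise structure, then broadcast back.
        g = F.adaptive_avg_pool2d(x, 1).expand_as(x)
        l = self.local(x)
        m = self.middle(x)
        return self.fuse(torch.cat([l, m, g], dim=1))


class LowHighFrequencySplit(nn.Module):
    """Separates a feature map into a smooth (low-frequency) component and a
    detail (high-frequency) residual, then re-aggregates them with learnable weights."""

    def __init__(self, channels: int, pool_size: int = 4):
        super().__init__()
        self.pool_size = pool_size
        self.alpha = nn.Parameter(torch.ones(2))  # learnable aggregation weights
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # Low frequency: blur via downsample + upsample (a simple low-pass proxy).
        low = F.interpolate(F.avg_pool2d(x, self.pool_size),
                            size=(h, w), mode="bilinear", align_corners=False)
        high = x - low  # high frequency: what the blur removed
        return self.proj(self.alpha[0] * low + self.alpha[1] * high)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 48, 16)   # toy batch of person feature maps
    fused = MultiScaleFusion(64)(feats)
    out = LowHighFrequencySplit(64)(fused)
    print(out.shape)  # torch.Size([2, 64, 48, 16])

In this sketch the low-frequency component is approximated by downsampling and upsampling the feature map, and the high-frequency component is the residual; learnable weights re-aggregate the two, mirroring the idea that both smooth structure and fine detail carry modality-robust identity cues.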

cross-modal; pedestrian re-identification; deep learning; multi-scale information fusion; low-high frequency feature aggregation

ZHU Peiwu, GAO Shuhui (朱沛伍、高树辉)


School of Criminal Investigation, People's Public Security University of China, Beijing 100038, China


2024

Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition)
Chongqing University of Posts and Telecommunications

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.66
ISSN: 1673-825X
Year, volume (issue): 2024, 36(6)