Journal of Electronics & Information Technology, 2024, Vol. 46, Issue 9: 3662-3671. DOI: 10.11999/JEIT240318


Visible-Infrared Person Re-identification Combining Visual-Textual Matching and Graph Embedding

张红颖 1, 樊世钰 2, 罗谦 3, 张涛 3

Author Information

  • 1. College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China
  • 2. College of Computer Science and Technology, Civil Aviation University of China, Tianjin 300300, China
  • 3. CAAC Chengdu Electronic Technology Co., Ltd., Chengdu 610041, China


Abstract

For cross-modal person Re-IDentification (Re-ID) in visible-infrared images, most existing methods adopt a modality-conversion strategy, generating images with adversarial networks to establish associative information between the two modalities. However, these approaches often fail to effectively reduce the modality gap, resulting in poor re-identification performance. To address this problem, a two-stage approach combining visual-text matching and graph embedding is proposed in this paper. A context-optimization scheme is used to construct learnable text templates that generate person descriptions serving as associative information between modalities. Specifically, in the first stage, unified text descriptions of the same person across different modalities, produced with the Contrastive Language-Image Pre-training (CLIP) model, are used as prior information to help reduce modality differences. In the second stage, a cross-modal constraint framework based on graph embedding is introduced, and a modality-adaptive loss function is designed to improve person recognition accuracy. The method's efficacy is confirmed through extensive experiments on the SYSU-MM01 and RegDB datasets, achieving a Rank-1 accuracy of 64.2% and a mean Average Precision (mAP) of 60.2% on SYSU-MM01, demonstrating improved accuracy for visible-infrared cross-modal person re-identification.
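
The first stage described in the abstract pairs each identity with a learnable text prompt whose encoded feature is shared by both modalities. The toy sketch below (NumPy only; the random-projection "encoder", the dimensions, and all names are illustrative assumptions, not the authors' implementation or the real CLIP model) shows how a single text feature built from learnable context tokens can act as a modality-shared anchor for visible and infrared image features:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32     # shared embedding dimension (assumed)
N_CTX = 4    # number of learnable context tokens, in the spirit of context optimization

# Toy stand-in for CLIP's frozen text encoder: a fixed random projection.
W_text = rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)

def encode_text(ctx, cls_tok):
    # "Text encoder": mean-pool the learnable context tokens plus the
    # identity token, then apply the frozen projection. Real CLIP uses
    # a transformer; this is only a placeholder.
    tokens = np.vstack([ctx, cls_tok[None, :]])
    return tokens.mean(axis=0) @ W_text

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# One shared learnable prompt per identity: context tokens + identity token.
ctx = rng.normal(size=(N_CTX, DIM)) * 0.02   # learnable context (optimized in stage 1)
id_token = rng.normal(size=DIM)
text_feat = encode_text(ctx, id_token)

# Visible and infrared features of the same identity (toy): noisy views of
# one underlying identity vector, mimicking the two modalities.
identity = rng.normal(size=DIM)
vis_feat = identity + 0.1 * rng.normal(size=DIM)
ir_feat = identity + 0.1 * rng.normal(size=DIM)

# The unified text description acts as the modality-shared anchor: both
# modality features are matched against the SAME text feature.
sim_vis = cosine(vis_feat, text_feat)
sim_ir = cosine(ir_feat, text_feat)
print(sim_vis, sim_ir)
```

In a real stage-1 run, `ctx` would be updated by gradient descent to maximize both similarities, so that one description fits the person in either modality.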

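For the second stage, the abstract describes a cross-modal constraint framework based on graph embedding with a modality-adaptive loss. A minimal sketch of one standard graph-embedding objective is given below (Laplacian smoothness over a same-identity graph, with an extra weight on cross-modality edges as a crude stand-in for "modality-adaptive"; the weights, shapes, and names are assumptions, not the paper's actual loss):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 16

# Toy setup: two identities, each observed in visible (0) and infrared (1).
ids = np.array([0, 0, 1, 1])       # identity label per sample
mods = np.array([0, 1, 0, 1])      # modality label per sample
feats = rng.normal(size=(4, DIM))  # stand-in embeddings

# Graph over all samples: connect same-identity pairs; weight cross-modality
# edges more heavily so the constraint pulls the two modalities together.
same_id = ids[:, None] == ids[None, :]
cross_mod = mods[:, None] != mods[None, :]
A = same_id * np.where(cross_mod, 2.0, 1.0)
np.fill_diagonal(A, 0.0)

# Graph-embedding loss: sum_ij A_ij * ||f_i - f_j||^2, which equals
# 2 * trace(F^T L F) with L = D - A the graph Laplacian.
D = np.diag(A.sum(axis=1))
Lap = D - A
loss_direct = sum(A[i, j] * np.sum((feats[i] - feats[j]) ** 2)
                  for i in range(4) for j in range(4))
loss_trace = 2.0 * np.trace(feats.T @ Lap @ feats)
```

Minimizing such a loss drags connected nodes (the same person across modalities) toward each other in the embedding space, which is the intuition behind the cross-modal graph constraint.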

Key words

Person Re-IDentification (Re-ID) / Cross-modal / Contrastive Language-Image Pre-training (CLIP) model / Context optimization / Graph embedding


Publication year: 2024
Journal: Journal of Electronics & Information Technology (电子与信息学报)
Sponsors: Institute of Electronics, Chinese Academy of Sciences; Department of Information Sciences, National Natural Science Foundation of China
Indexing: CSTPCD; Peking University Core Journals
Impact factor: 1.302
ISSN: 1009-5896