Cross-Modal Person Re-identification Driven by Cross-Channel Interactive Attention Mechanism in Dual-Stream Networks
Existing cross-modal person re-identification methods often fail to account for both the inter-modal and intra-modal differences of the target person, making it difficult to further improve retrieval accuracy. To address this problem, this paper introduces a cross-channel interactive attention mechanism that strengthens the robust extraction of person features, effectively suppresses irrelevant features, and yields more discriminative feature representations. Furthermore, hetero-center triplet loss, triplet loss, and identity loss are combined for supervised learning, jointly constraining inter-modal and intra-class variations in person features. Experimental results demonstrate the effectiveness of the proposed method, which outperforms seven existing methods on two standard datasets, RegDB and SYSU-MM01.
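To make the two key ingredients of the abstract concrete, the sketch below illustrates (a) a cross-channel interaction attention gate in the style of ECA (global pooling followed by a local 1D convolution across channels and a sigmoid gate) and (b) a hetero-center triplet term that pulls together the visible and infrared feature centers of the same identity while pushing apart centers of different identities. This is a minimal NumPy illustration under assumed shapes and placeholder (unlearned) weights, not the authors' actual implementation; the function names, the fixed kernel size, and the margin value are all assumptions for exposition.

```python
import numpy as np

def cross_channel_attention(feat, k=3):
    """ECA-style cross-channel interaction attention (illustrative sketch).

    feat: (C, H, W) feature map. A channel descriptor is obtained by global
    average pooling, mixed with its k-neighborhood of channels via a 1D
    convolution (here with placeholder uniform weights; learned in practice),
    squashed by a sigmoid, and used to re-weight the channels.
    """
    C, H, W = feat.shape
    desc = feat.mean(axis=(1, 2))                 # (C,) channel descriptor
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")       # pad so output stays length C
    w = np.ones(k) / k                            # placeholder conv weights
    mixed = np.array([padded[i:i + k] @ w for i in range(C)])
    gate = 1.0 / (1.0 + np.exp(-mixed))           # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]             # channel-wise re-weighting

def hetero_center_triplet(vis_feats, ir_feats, labels, margin=0.3):
    """Hetero-center triplet loss (illustrative sketch, >= 2 identities).

    For each identity, the distance between its visible-modality center and
    its infrared-modality center (positive pair) should be smaller, by a
    margin, than the distance to the nearest center of any other identity.
    """
    ids = np.unique(labels)
    cv = np.stack([vis_feats[labels == i].mean(axis=0) for i in ids])
    ci = np.stack([ir_feats[labels == i].mean(axis=0) for i in ids])
    loss = 0.0
    for a in range(len(ids)):
        pos = np.linalg.norm(cv[a] - ci[a])       # same id, across modalities
        neg = min(np.linalg.norm(cv[a] - ci[b])   # hardest cross-id center
                  for b in range(len(ids)) if b != a)
        loss += max(0.0, margin + pos - neg)
    return loss / len(ids)
```

In a full model these two terms would be combined with a conventional triplet loss and an identity (cross-entropy) classification loss, as the abstract describes; the attention gate is applied inside the dual-stream backbone, while the losses supervise the shared embedding space.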