Semantic-Consistent and Multilayer Similarity-Based Cross-Modal Hashing Retrieval
[Objective] This paper proposes a cross-modal retrieval method that learns rich semantic representations through associated labels and retains more discriminative information in hash codes. The method considers cross-modal semantic similarity, maintains relevance between different modalities, and better bridges the gap between them. [Methods] Under a multi-label association constraint, we explore the common semantic information and the latent class semantic structure of different modalities. We then adopt an asymmetric learning framework to jointly measure high-level and low-level semantic similarity, and quantize the result to obtain more discriminative hash codes. [Results] We conducted experiments on three multi-modal benchmark datasets, MIRFlickr-25K, IAPR TC-12, and NUS-WIDE, comparing the proposed method with seven other methods. Across five code lengths, the average MAP values of the proposed method exceeded the best baseline result by 2.1%, 5.8%, and 2.1% on the three datasets, respectively. [Limitations] The proposed method is better suited to multi-label datasets and is less effective at mining the semantic relevance of single-label data. [Conclusions] The proposed method maintains the consistency of sample-level and class-level semantic structures, fully exploits the inherent features of each modality, and effectively improves retrieval performance.
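To make the asymmetric joint-similarity idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: continuous image and text codes F and G are aligned with fixed discrete codes B under a pairwise similarity matrix S derived from shared multi-labels, with a quantization term pulling the continuous codes toward binary values. All network shapes, dimensions, and variable names here are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact model): asymmetric
# cross-modal hashing with label-derived pairwise similarity.
import torch
import torch.nn as nn

code_len, n, dim_img, dim_txt = 32, 8, 512, 300  # hypothetical sizes

# Simple modality-specific hashing heads producing codes in (-1, 1)
img_net = nn.Sequential(nn.Linear(dim_img, code_len), nn.Tanh())
txt_net = nn.Sequential(nn.Linear(dim_txt, code_len), nn.Tanh())

x_img = torch.randn(n, dim_img)                 # image features
x_txt = torch.randn(n, dim_txt)                 # text features
labels = torch.randint(0, 2, (n, 24)).float()   # multi-label matrix

# Pairwise semantic similarity: +1 if two samples share any label, else -1
S = (labels @ labels.t() > 0).float() * 2 - 1

F = img_net(x_img)                  # continuous image codes
G = txt_net(x_txt)                  # continuous text codes
B = torch.sign((F + G).detach())    # discrete codes, held fixed (asymmetric)

# Asymmetric similarity terms (continuous vs. discrete codes)
# plus a quantization term pulling F and G toward the binary codes B.
loss = ((F @ B.t() / code_len - S) ** 2).mean() \
     + ((G @ B.t() / code_len - S) ** 2).mean() \
     + ((F - B) ** 2).mean() + ((G - B) ** 2).mean()
loss.backward()
```

In this sketch the asymmetry lies in comparing the continuous network outputs against fixed discrete codes rather than against each other, which is one common way such frameworks reduce quantization error when learning hash codes.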