Neural Networks 2022, Vol. 152, 16. DOI: 10.1016/j.neunet.2022.05.008

Visual context learning based on textual knowledge for image-text retrieval

Qin, Yuzhuo; Gu, Xiaodong; Tan, Zhenshan

Author information

  • 1. Dept Elect Engn, Fudan Univ

Abstract

Image-text bidirectional retrieval is a significant task in the cross-modal learning field. The main challenges lie in learning a joint embedding and accurately measuring the image-text matching score. Most prior works use either intra-modality methods, which operate within the two modalities separately, or inter-modality methods, which couple the two modalities tightly. However, intra-modality methods remain ambiguous when learning visual context because of redundant messages, and inter-modality methods increase retrieval complexity by unifying the two modalities closely during feature learning. In this research, we propose an eclectic Visual Context Learning based on Textual knowledge Network (VCLTN), which transfers textual knowledge to the visual modality for context learning and reduces the discrepancy in information capacity between the two modalities. Specifically, VCLTN merges label semantics into the corresponding regional features and employs these labels as intermediaries between images and texts for better modal alignment. Contextual knowledge of the labels, learned within the textual modality, is used to guide visual context learning. In addition, considering the homogeneity within each modality, global features are merged into regional features to assist context learning. To alleviate the imbalance of information capacity between images and texts, the entities and relations in the given caption are extracted, and an auxiliary caption is sampled to attach supplementary information to the textual modality. Experiments on Flickr30K and MS-COCO show that VCLTN achieves the best results compared with state-of-the-art methods. (C) 2022 Elsevier Ltd. All rights reserved.
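The fusion and context steps summarized above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name, the additive fusion, and the use of label-label similarity as attention weights are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_guided_context(regions, label_emb, global_feat):
    """Illustrative sketch of VCLTN-style context learning:
    fuse label semantics into region features, aggregate visual
    context with affinities derived from the label embeddings
    (standing in for textual knowledge), then merge the global
    feature into each region. Names and design are assumptions."""
    # 1) Merge label semantics into the corresponding regional features.
    fused = regions + label_emb                  # (n_regions, d)
    # 2) Textual knowledge guides context: attention weights from
    #    pairwise label similarity.
    affinity = label_emb @ label_emb.T           # (n_regions, n_regions)
    attn = softmax(affinity, axis=-1)
    context = attn @ fused                       # label-guided visual context
    # 3) Merge the global feature into every regional feature.
    return context + global_feat[None, :]

rng = np.random.default_rng(0)
n_regions, d = 5, 8
out = label_guided_context(rng.normal(size=(n_regions, d)),
                           rng.normal(size=(n_regions, d)),
                           rng.normal(size=d))
print(out.shape)  # (5, 8)
```

Each output row is a convex combination of fused region features plus the global feature, so every region's representation carries context weighted by how related its label is to the other regions' labels.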

Key words

Image-text retrieval; Knowledge transfer; Visual context learning; Modal alignment; Attention


Publication year: 2022
Journal: Neural Networks
Indexed in: EI, SCI
ISSN: 0893-6080
Cited by: 6
References: 68