Computer Engineering and Design (计算机工程与设计) 2024, Vol. 45, Issue 8: 2393-2399. DOI: 10.16208/j.issn1000-7024.2024.08.021

Graph convolution based unsupervised hash method for cross-modal retrieval

Long Jun, Deng Qianyin, Chen Yunfei, Yang Zhan
Author information

  • 1. Big Data Institute, School of Computer Science and Engineering, Central South University, Changsha, Hunan 410083, China

Abstract

To address the difficulties that current unsupervised cross-modal hashing retrieval faces in constructing a global similarity matrix and fusing the semantic information of heterogeneous data, a graph convolution based unsupervised hashing method for cross-modal retrieval (GCUH) is proposed. Hierarchical aggregation encodes the similarity structure of each modality into a global similarity matrix, yielding pairwise cross-modal similarity information to guide learning. A graph convolution module fuses cross-modal information and suppresses noise in the neighbor structure, producing complete cross-modal representations, and two similarity-preserving loss functions constrain the consistency of the learned hash codes. Compared with the baseline models, GCUH improves retrieval accuracy by 6.3% on the NUS-WIDE dataset with 64-bit hash codes on the text-to-image retrieval task.
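The abstract outlines three ingredients: a global similarity matrix aggregated from per-modality similarity structures, a graph-convolution step that fuses cross-modal neighborhood information, and similarity-preserving losses on the hash codes. The paper's exact formulation is not reproduced on this page, so the following is only a minimal NumPy sketch under assumed choices (cosine similarity per modality, equal-weight aggregation with one neighborhood-smoothing pass, an MSE similarity-preservation loss); every function name and parameter here is illustrative, not GCUH's actual definition.

```python
import numpy as np

def modality_similarity(feats):
    # cosine similarity between all sample pairs within one modality
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T

def global_similarity(img_feats, txt_feats, alpha=0.5):
    # hierarchical aggregation (illustrative): fuse the per-modality
    # similarity structures into one global matrix, then smooth it
    # with a neighbors-of-neighbors pass
    s = (alpha * modality_similarity(img_feats)
         + (1 - alpha) * modality_similarity(txt_feats))
    return 0.5 * s + 0.5 * (s @ s) / s.shape[0]

def gcn_propagate(x, adj, weight):
    # one graph-convolution step with symmetric normalization;
    # adj is assumed non-negative with self-loops already added
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(adj.sum(axis=1), 1e-12))
    adj_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.maximum(adj_norm @ x @ weight, 0.0)  # ReLU

def similarity_preserving_loss(codes, s_target):
    # mean-squared gap between the cosine similarity of (relaxed)
    # hash codes and the target similarity matrix -- one way to
    # constrain hash-code consistency
    b = np.tanh(codes)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean((b @ b.T - s_target) ** 2))
```

In this sketch the global similarity matrix doubles as the graph adjacency (after clipping negatives and adding self-loops), so the GCN step lets each sample absorb information from its cross-modal neighbors before hashing.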

Key words

hash learning / cross-modal / unsupervised deep learning / graph convolutional network / similarity construction / information retrieval / machine learning


Funding

National Natural Science Foundation of China (62202501)

National Natural Science Foundation of China (U2003208)

National Key R&D Program of China (2021YFB3900902)

Natural Science Foundation of Hunan Province (2022JJ40638)

Publication year: 2024
Journal: Computer Engineering and Design (计算机工程与设计)
Publisher: The 706th Institute, Second Academy of China Aerospace Science and Industry Corporation
Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.617
ISSN: 1000-7024