
Multi-modal Entity Alignment Based on Multi-level Feature Fusion and Reinforcement Learning

Conventional entity alignment methods make insufficient use of multimodal information and ignore the potential interactions between modalities during feature fusion. To address these problems, this paper proposes a multimodal entity alignment method that exploits the different modal features of entities to find equivalent entities across multimodal knowledge graphs. First, separate feature encoders are used to obtain embeddings of attributes, relations, images, and graph structure, and a numerical modality is introduced to enrich entity semantics. Second, in the feature fusion stage, cross-modal complementarity and relevance are modeled jointly on the basis of contrastive learning, and reinforcement learning is introduced to optimize the model output, reducing the heterogeneity gap between the learned joint embedding and the original modal embeddings. Finally, the cosine similarity between pairs of entities is computed to select candidate aligned entity pairs, which are iteratively added to the alignment seeds to guide further alignment. Experimental results show that the proposed method is effective for the multimodal entity alignment task.
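As an illustration of the fusion stage described above, the following is a minimal sketch, assuming PyTorch, a learned softmax-weighted sum over per-modality embeddings, and an InfoNCE-style contrastive loss between the joint embedding and each modal embedding. The names `ModalFusion` and `temperature` and the specific fusion form are illustrative assumptions; the paper's reinforcement-learning refinement of the output is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalFusion(nn.Module):
    """Softmax-weighted fusion of per-modality entity embeddings (illustrative)."""
    def __init__(self, num_modalities):
        super().__init__()
        # One learnable weight per modality (attribute, relation, image, structure, numeric).
        self.weights = nn.Parameter(torch.ones(num_modalities))

    def forward(self, modal_embs):
        # modal_embs: list of [num_entities, dim] tensors, one per modality.
        alpha = torch.softmax(self.weights, dim=0)
        fused = sum(a * F.normalize(e, dim=-1) for a, e in zip(alpha, modal_embs))
        return F.normalize(fused, dim=-1)

def infonce_loss(joint, modal, temperature=0.1):
    """Pull each entity's joint embedding toward its own modal embedding and
    push it away from other entities' modal embeddings (InfoNCE-style)."""
    joint = F.normalize(joint, dim=-1)
    modal = F.normalize(modal, dim=-1)
    logits = joint @ modal.t() / temperature            # [N, N] similarity matrix
    targets = torch.arange(joint.size(0), device=joint.device)
    return F.cross_entropy(logits, targets)
```

In practice the contrastive term would be summed over all modalities so that the joint embedding stays consistent with each individual modal view.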
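The candidate selection and iterative seeding step can be sketched as follows, assuming joint embeddings for the entities of the two knowledge graphs are already available. The mutual-nearest-neighbour filter, the 0.85 similarity threshold, and the fixed number of rounds are assumptions made for illustration, not details given in the abstract.

```python
import torch
import torch.nn.functional as F

def iterative_alignment(emb_kg1, emb_kg2, seeds, threshold=0.85, rounds=3):
    """Select high-confidence alignment pairs by cosine similarity and
    add them to the seed set over several rounds (illustrative values)."""
    emb1 = F.normalize(emb_kg1, dim=-1)
    emb2 = F.normalize(emb_kg2, dim=-1)
    seeds = set(seeds)
    for _ in range(rounds):
        sim = emb1 @ emb2.t()                  # cosine similarity matrix
        best2 = sim.argmax(dim=1)              # best KG2 match for each KG1 entity
        best1 = sim.argmax(dim=0)              # best KG1 match for each KG2 entity
        for i, j in enumerate(best2.tolist()):
            # Keep mutual nearest neighbours whose similarity exceeds the threshold.
            if best1[j].item() == i and sim[i, j] >= threshold:
                seeds.add((i, j))
        # In the full method the enlarged seed set would supervise re-training
        # of the embeddings before the next round; that step is omitted here.
    return seeds
```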

multimodal knowledge graph; representation learning; entity alignment; feature fusion

LI Huayu, WANG Cuicui, ZHANG Zhikang, LI Haiyang


College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, Shandong 266580, China


Natural Science Foundation of Shandong Province (ZR2020MF140); Graduate Innovation Fund of China University of Petroleum (East China) (22CX04035A)

2024

Journal of Chinese Information Processing
Chinese Information Processing Society of China; Institute of Software, Chinese Academy of Sciences

CSTPCD; CHSSCD; Peking University Core Journals (北大核心)
Impact factor: 0.8
ISSN: 1003-0077
Year, Volume (Issue): 2024, 38(9)