Multi-modal entity alignment method based on adaptive mutual information

Multi-modal entity alignment is a critical step in the process of knowledge fusion. However, heterogeneous multi-modal knowledge graphs exhibit significant structural differences, and their multi-modal information is often incomplete, leading to suboptimal alignment outcomes when using current multi-modal entity alignment methods. To address these issues, this paper proposed a multi-modal entity alignment method based on adaptive mutual information (MAMEA). On the one hand, it designed an adaptive fusion mechanism to reduce modal differences and to dynamically assign weights based on the contribution of modal information. On the other hand, it introduced mutual information as an additional feature to enhance the representation of entity features. Finally, it performed entity alignment using entity similarity calculations. Experimental results on five common datasets show that MAMEA outperforms current baseline models, with a maximum improvement of 1.8% and a minimum improvement of 1.4% in the hits@1 metric, and a maximum improvement of 1.4% and a minimum improvement of 0.8% in the MRR metric. These results demonstrate that the proposed model can effectively enhance the performance of multi-modal entity alignment.
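The fuse-then-match pipeline outlined in the abstract (weight each modality by its contribution, fuse, then align by embedding similarity) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the softmax weighting over per-modality contribution scores, and the greedy cosine-similarity matching are all assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_fuse(modal_embs, scores):
    """Fuse per-modality embeddings of one entity.

    modal_embs: dict modality name -> (d,) embedding vector
    scores:     dict modality name -> scalar contribution score
    Weights are a softmax over the contribution scores, so a more
    informative (or less missing) modality dominates the fused vector.
    """
    names = list(modal_embs)
    w = softmax(np.array([scores[n] for n in names]))
    return sum(wi * modal_embs[n] for wi, n in zip(w, names))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def align(src, tgt):
    """Greedy alignment: map each source entity to its most similar target."""
    return {i: max(tgt, key=lambda j: cosine(src[i], tgt[j])) for i in src}

# Example: two modalities with equal contribution scores fuse to the mean.
fused = adaptive_fuse(
    {"graph": np.array([1.0, 0.0]), "image": np.array([0.0, 1.0])},
    {"graph": 0.0, "image": 0.0},
)

# Example: fused embeddings from two graphs are matched by cosine similarity.
src = {"e1": np.array([1.0, 0.0]), "e2": np.array([0.0, 1.0])}
tgt = {"a": np.array([0.9, 0.1]), "b": np.array([0.1, 0.9])}
pairs = align(src, tgt)
```

In this sketch, hits@1 would simply be the fraction of source entities whose top-ranked target is the gold match; the paper's mutual-information feature would enter as an additional term in the similarity score.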

multimodal knowledge graph; entity alignment; adaptive feature fusion; contrastive representation learning; mutual information

Gao Yongjie, Dang Jianwu, Zhang Xiquan, Zheng Aiguo


Key Laboratory of Opto-Technology and Intelligent Control, Ministry of Education, Lanzhou Jiaotong University, Lanzhou 730070, China

National Virtual Simulation Experiment Teaching Center of Rail Transit Information and Control, Lanzhou 730070, China

School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China

Tianjin Electric Service Section, China Railway Beijing Group Co., Ltd., Tianjin 300143, China



2025

Application Research of Computers (计算机应用研究)
四川省电子计算机应用研究中心 (Sichuan Provincial Electronic Computer Application Research Center)

Peking University Core Journal (北大核心)
Impact factor: 0.93
ISSN: 1001-3695
Year, volume (issue): 2025, 42(1)