Multi-modal entity alignment is a critical step in knowledge fusion. However, heterogeneous multi-modal knowledge graphs exhibit significant structural differences, and their multi-modal information is often incomplete, leading to suboptimal alignment results with current multi-modal entity alignment methods. To address these issues, this paper proposes MAMEA, a multi-modal entity alignment method based on adaptive mutual information. On the one hand, it designs an adaptive fusion mechanism that reduces modal differences and dynamically assigns weights according to the contribution of each modality's information. On the other hand, it introduces mutual information as an additional feature to enrich entity representations. Finally, it performs entity alignment via entity similarity computation. Experimental results on five common datasets show that MAMEA outperforms current baseline models, with improvements ranging from 1.4% to 1.8% in the Hits@1 metric and from 0.8% to 1.4% in the MRR metric. These results demonstrate that the proposed model effectively enhances the performance of multi-modal entity alignment.
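The adaptive fusion and similarity-based alignment described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the scoring vector `score_w`, the tensor shapes, and the use of cosine similarity for the final matching step are all hypothetical choices made for the sketch.

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(modal_embs, score_w):
    """Fuse per-modality entity embeddings with input-dependent weights.

    modal_embs: (M, B, D) -- M modalities, B entities, D-dim embeddings
    score_w:    (D,)      -- hypothetical learned scoring vector
    """
    scores = modal_embs @ score_w              # (M, B): per-entity modality scores
    weights = softmax(scores, axis=0)          # weights over modalities sum to 1
    return (weights[..., None] * modal_embs).sum(axis=0)  # (B, D) fused embedding

def align(src, tgt):
    """Match each source entity to its nearest target by cosine similarity."""
    s = src / np.linalg.norm(src, axis=1, keepdims=True)
    t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return (s @ t.T).argmax(axis=1)            # index of best-matching target
```

In a full model the modality weights would come from a trained scoring network rather than a fixed vector, and the mutual-information term described in the abstract would be added as an auxiliary feature or training objective; the sketch only shows the fusion-then-similarity pipeline.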
Key words
multimodal knowledge graph/entity alignment/adaptive feature fusion/contrastive representation learning/mutual information