MTCEA: Guiding Multi-Modal Entity Alignment via Entity-Type Information

Multi-modal entity alignment aims to identify equivalent entities across diverse knowledge graphs by leveraging multiple modalities of entity information. This process is crucial for the fusion of multi-modal knowledge graphs. While current research primarily investigates how to utilize side information from entity visuals, relations, and attributes, it often overlooks the significant role of entity-type information. Furthermore, multi-modal data embedding introduces noise that degrades the performance of the entity alignment task. To address these gaps, this paper introduces MTCEA, a multi-modal entity alignment method guided by entity-type information. The proposed method captures the constraints associated with entities based on the entity-type information obtained from the knowledge graph ontology; it then applies two type-constraint embedding strategies to enhance the model's knowledge representation. This enables effective modal fusion that integrates fine-grained, type-related semantic constraints, improving alignment accuracy across cross-lingual knowledge graphs. MTCEA is validated on three subsets of DBP15K. Experimental results show that the model achieves strong overall performance on the Hits@1, Hits@10, and MRR metrics. In an experimental setting without using entity names, MTCEA outperforms state-of-the-art baselines.

Entity alignment; entity type; multi-modal knowledge graph; type association constraints

Xiaoming Zhang, Ziyi Zheng, Huiyong Wang, Mehdi Naseriparsa

School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang, P. R. China

Institute of Innovation, Science and Sustainability, Federation University, University Drive, Mt Helen, Ballarat, Australia

2025

International Journal of Software Engineering and Knowledge Engineering