Multimodal Entity Alignment Based on Relation-aware Multi-subgraph Graph Neural Network
Multi-modal entity alignment (MMEA) is a crucial technique for integrating multi-source, heterogeneous multi-modal knowledge graphs (MMKGs). This integration is typically achieved by encoding graph structure and measuring the plausibility of multi-modal representations between entities. However, existing MMEA methods tend to employ pre-trained models directly, overlooking both the fusion between modalities and the fusion of modal features with graph structure. To address these limitations, this study proposes a novel approach, the relation-aware multi-subgraph graph neural network (RAMS), for obtaining multi-modal representations in the context of entity alignment. RAMS uses a multi-subgraph graph neural network to fuse modality information with graph structure and derive entity representations; alignment results are then obtained through cross-domain similarity calculation. Extensive experiments demonstrate that RAMS outperforms baseline models in terms of accuracy, efficiency, and robustness.
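In the entity-alignment literature, "cross-domain similarity calculation" typically refers to cross-domain similarity local scaling (CSLS), which corrects for hub entities when matching embeddings across two knowledge graphs; whether RAMS uses exactly this formulation is an assumption here. Below is a minimal sketch of CSLS-based matching over two sets of entity embeddings (all names and shapes are illustrative, not the paper's implementation).

```python
import numpy as np

def csls_scores(src_emb, tgt_emb, k=10):
    """Cross-domain similarity local scaling (CSLS) between two embedding sets.

    src_emb: (n, d) array of source-KG entity embeddings.
    tgt_emb: (m, d) array of target-KG entity embeddings.
    k: neighborhood size used to estimate the hubness penalty.
    """
    # Cosine similarity matrix between all source/target entity pairs.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T  # (n, m)

    # Mean similarity of each entity to its k nearest cross-domain neighbors.
    r_src = np.mean(np.sort(sim, axis=1)[:, -k:], axis=1)  # (n,)
    r_tgt = np.mean(np.sort(sim, axis=0)[-k:, :], axis=0)  # (m,)

    # CSLS penalizes "hub" entities that are similar to everything,
    # which otherwise dominate nearest-neighbor retrieval.
    return 2 * sim - r_src[:, None] - r_tgt[None, :]

# Each source entity is aligned to its highest-scoring target entity.
scores = csls_scores(np.random.randn(5, 16), np.random.randn(7, 16), k=3)
matches = scores.argmax(axis=1)
```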