Multi-modal entity alignment model based on adaptive fusion technology
Multi-modal entity alignment aims to identify equivalent entities across different multi-modal knowledge graphs composed of structured triples and entity-associated images. Existing research on multi-modal entity alignment focuses mainly on multi-modal fusion strategies, ignores the problems of modal imbalance and the difficulty of integrating different modalities, and fails to fully exploit multi-modal information. To address these problems, this paper proposes the MACEA model, which uses a multi-modal variational autoencoder to actively complete missing modal information, a dynamic modal fusion method to integrate and complement information from different modalities, and an inter-modal contrastive learning method to model the relations between modalities. These methods effectively alleviate the problems of missing modalities and difficult modal fusion. Compared with the baseline model, MACEA improves the Hits@1 and MRR metrics by 5.72% and 6.78%, respectively. The experimental results show that the proposed method can effectively identify aligned entity pairs with high accuracy and practicality.
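The abstract does not give the concrete formulations of dynamic modal fusion or inter-modal contrastive learning. The following is a minimal sketch, assuming a learned per-modality gating weight for fusion and an InfoNCE-style loss for the inter-modal contrastive objective; the function names, tensor shapes, and temperature value are illustrative assumptions, not MACEA's actual implementation.

```python
import torch
import torch.nn.functional as F

def dynamic_fusion(modal_embs, gate):
    # modal_embs: list of [N, d] entity embeddings, one tensor per modality
    # gate: a learnable layer scoring each modality per entity (assumed form)
    stacked = torch.stack(modal_embs, dim=1)                    # [N, M, d]
    weights = torch.softmax(gate(stacked).squeeze(-1), dim=1)   # [N, M]
    return (weights.unsqueeze(-1) * stacked).sum(dim=1)         # [N, d]

def intermodal_contrastive_loss(z_a, z_b, temperature=0.1):
    # InfoNCE-style objective: the two modality embeddings of the same
    # entity are pulled together; other entities in the batch are negatives.
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                        # [N, N]
    targets = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    # Toy usage: 8 entities, 64-dimensional structural and visual embeddings
    struct = torch.randn(8, 64)
    visual = torch.randn(8, 64)
    gate = torch.nn.Linear(64, 1)
    fused = dynamic_fusion([struct, visual], gate)
    loss = intermodal_contrastive_loss(struct, visual)
    print(fused.shape, loss.item())
```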