Adaptive Feature Fusion for Multi-modal Entity Alignment
The recent surge of interactive tasks involving multi-modal data has created a high demand for utilizing knowledge in different modalities. This has facilitated the birth of multi-modal knowledge graphs, which aggregate multi-modal knowledge to meet the demands of these tasks. However, they are known to suffer from a knowledge incompleteness problem that hinders the utilization of information. To mitigate this problem, it is necessary to improve knowledge coverage via entity alignment. Current entity alignment methods fuse multi-modal information with fixed weights, which ignores the different contributions of individual modalities. To address this challenge, we propose an adaptive feature fusion mechanism that combines entity structure information and visual information via dynamic fusion according to data quality. In addition, considering that low-quality visual information and structural differences between knowledge graphs further impact the performance of entity alignment, we design a visual feature processing module to improve the effective utilization of visual information and a triple filtering module to ease structural differences. Experiments on multi-modal entity alignment indicate that our method outperforms the state of the art.
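As a rough illustration of the dynamic weighting idea (a minimal sketch, not the authors' implementation), the code below learns per-entity modality weights from the projected features themselves, so a low-quality visual embedding can be down-weighted relative to the structural embedding instead of being fused with a fixed coefficient. All dimensions, layer choices, and names here are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Sketch of adaptive feature fusion over two modalities."""

    def __init__(self, struct_dim: int, visual_dim: int, out_dim: int):
        super().__init__()
        # Project both modalities into a shared space.
        self.struct_proj = nn.Linear(struct_dim, out_dim)
        self.visual_proj = nn.Linear(visual_dim, out_dim)
        # Scores one scalar per modality, per entity.
        self.scorer = nn.Linear(out_dim, 1)

    def forward(self, struct_emb: torch.Tensor, visual_emb: torch.Tensor) -> torch.Tensor:
        hs = self.struct_proj(struct_emb)            # (batch, out_dim)
        hv = self.visual_proj(visual_emb)            # (batch, out_dim)
        h = torch.stack([hs, hv], dim=1)             # (batch, 2, out_dim)
        # Data-dependent weights replace fixed weighting.
        w = F.softmax(self.scorer(torch.tanh(h)), dim=1)  # (batch, 2, 1)
        return (w * h).sum(dim=1)                    # (batch, out_dim)

# Hypothetical usage: 200-d structural embeddings, 2048-d visual features.
fusion = AdaptiveFusion(struct_dim=200, visual_dim=2048, out_dim=300)
fused = fusion(torch.randn(4, 200), torch.randn(4, 2048))
print(fused.shape)  # torch.Size([4, 300])
```

Because the weights are computed from each entity's own features, the fusion adapts per entity rather than applying one global trade-off between modalities.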