
Few-Shot Knowledge Graph Completion Based on Subgraph Structure Semantic Enhancement

A few-shot knowledge graph completion model based on subgraph structure semantic enhancement is proposed to address the problem of insufficient entity representation in few-shot scenarios. First, an attention mechanism is employed to extract each node's text semantic features, centered on relation interaction, and its subgraph structure semantic features, centered on the clustering coefficient. Next, a feedforward neural network aggregates the entity semantics, and a Transformer network encodes the triples. Finally, link prediction scores are computed by a prototype matching network. Experiments show that the proposed model outperforms all metric-learning-based baseline models; compared with the latest meta-learning-based baselines, it improves Hits@1 on the NELL-One dataset and all metrics on the Wiki-One dataset, demonstrating its effectiveness in enhancing entity representation and improving link prediction.
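Two of the components the abstract names are concrete enough to illustrate: the local clustering coefficient used as a structural feature, and the prototype matching score computed over a few-shot support set. The following is a minimal sketch assuming a plain-Python adjacency-set graph and list-based embeddings; the function names and representations are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations

def clustering_coefficient(adj, node):
    """Local clustering coefficient of `node`: the fraction of possible
    edges among its neighbors that actually exist in the graph.
    `adj` maps each node to the set of its neighbors."""
    nbrs = adj.get(node, set())
    k = len(nbrs)
    if k < 2:
        return 0.0  # coefficient is undefined; 0.0 by convention
    # Count edges between pairs of neighbors
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj.get(u, set()))
    return 2.0 * links / (k * (k - 1))

def prototype_match_score(query, support_set):
    """Score a query triple embedding against the prototype (mean vector)
    of the few-shot support embeddings, using dot-product similarity."""
    dim = len(query)
    proto = [sum(vec[i] for vec in support_set) / len(support_set)
             for i in range(dim)]
    return sum(q * p for q, p in zip(query, proto))

# A triangle graph: every neighbor pair is connected, so the coefficient is 1.0
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(clustering_coefficient(adj, "a"))  # 1.0
```

In the model described above, the structural feature would be combined with attention-weighted text features before the Transformer encoding step; this sketch only shows the two scalar quantities in isolation.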

few-shot learning contexts; knowledge graph completion; clustering coefficient; structural semantics; attention mechanism

杨荣泰、邵玉斌、杜庆治、龙华、马迪南


Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China

Yunnan Provincial Key Laboratory of Media Convergence, Kunming 650032, China


Yunnan Provincial Key Laboratory of Media Convergence Project

220235205

2024

Journal of Beijing University of Posts and Telecommunications
Beijing University of Posts and Telecommunications

Indexed in: CSTPCD; PKU Core Journals
Impact factor: 0.592
ISSN:1007-5321
Year, Volume (Issue): 2024, 47(4)