
A New Incremental Graph Neural Network Learning Framework

In recent years, graph neural networks (GNNs) have attracted extensive attention owing to their superior performance in graph representation learning. However, because training relies on full-graph gradient descent, GNNs face a significant challenge in convergence speed on large-scale graph data. To address this, a new GNN framework is proposed: the sampling-based incremental graph neural network (IGNN). First, IGNN uses a sampling strategy to split a large-scale graph into multiple small-scale subgraphs, which preserves attention to local detail while alleviating the efficiency problem caused by the computational cost of large-scale graphs. Second, to maintain the original correlation between subgraphs, IGNN encapsulates an incremental subgraph-sharing strategy that shares weight matrices across subgraphs of different scales, progressively obtaining embedding representations for subgraphs of each scale and thereby making training converge faster. The proposed IGNN framework is plug-and-play and applicable to many popular GNN models. Experimental results on semi-supervised node classification over eight graph datasets show that IGNN has a clear advantage in classification accuracy under different label rates. In particular, IGNN reaches higher classification accuracy than the compared models within fewer training epochs.
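To make the two steps described in the abstract more concrete, the sketch below illustrates the general idea in plain PyTorch. It is not the authors' implementation: the helper names (normalized_adj, SharedGCN, incremental_train), the prefix-based node sampling, the dense adjacency layout, the two-layer GCN, and the stage sizes are all hypothetical choices, used only to show how one shared set of weight matrices can be trained incrementally on progressively larger sampled subgraphs.

```python
# Hypothetical sketch of the IGNN idea (not the paper's code): split a large graph
# into nested subgraphs of growing size, then train ONE shared set of GCN weight
# matrices on each subgraph in turn, so embeddings are refined stage by stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalized_adj(adj):
    """Symmetrically normalize a dense adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class SharedGCN(nn.Module):
    """Two-layer GCN whose weight matrices are shared across all sampled subgraphs."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

def incremental_train(x, adj, labels, train_mask, subgraph_sizes, epochs_per_stage=20):
    """Train on progressively larger node-sampled subgraphs with one shared model."""
    model = SharedGCN(x.size(1), 16, int(labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    perm = torch.randperm(x.size(0))     # fixed node order; prefixes give nested subgraphs
    for size in subgraph_sizes:          # e.g. [256, 512, x.size(0)]
        idx = perm[:size]
        sub_adj = normalized_adj(adj[idx][:, idx])
        sub_mask = train_mask[idx]
        if not sub_mask.any():           # skip stages that sampled no labelled nodes
            continue
        for _ in range(epochs_per_stage):
            opt.zero_grad()
            out = model(x[idx], sub_adj)
            loss = F.cross_entropy(out[sub_mask], labels[idx][sub_mask])
            loss.backward()
            opt.step()
    return model

# Toy usage on synthetic data (illustrative only):
# n = 1000
# x = torch.randn(n, 32)
# adj = (torch.rand(n, n) < 0.01).float(); adj = ((adj + adj.t()) > 0).float()
# labels = torch.randint(0, 3, (n,)); mask = torch.rand(n) < 0.1
# model = incremental_train(x, adj, labels, mask, subgraph_sizes=[200, 500, n])
```

Because the same weight matrices are reused at every stage, parameters learned on the small early subgraphs carry over to the larger ones, which is how the framework maintains correlation between subgraphs while avoiding full-graph gradient descent from the start.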

graph neural networks; incremental learning; node classification; semi-supervised learning; sampling

杨习贝、丛辉、范燕、王平心


School of Computer, Jiangsu University of Science and Technology, Zhenjiang 212100, Jiangsu, China

School of Science, Jiangsu University of Science and Technology, Zhenjiang 212100, Jiangsu, China


2024

闽南师范大学学报(自然科学版) (Journal of Minnan Normal University, Natural Science Edition)
漳州师范学院

Impact factor: 0.272
ISSN: 1008-7826
Year, volume (issue): 2024, 37(3)