Inductive Lottery Ticket Learning for Graph Neural Networks
Graph neural networks (GNNs) have gained increasing popularity, while usually suffering from unaffordable computations for real-world large-scale applications. Hence, pruning GNNs is of great need but largely unexplored. The recent work Unified GNN Sparsification (UGS) studies lottery ticket learning for GNNs, aiming to find a subset of model parameters and graph structures that can best maintain the GNN performance. However, it is tailored for the transductive setting, failing to generalize to unseen graphs, which are common in inductive tasks like graph classification. In this work, we propose a simple and effective learning paradigm, Inductive Co-Pruning of GNNs (ICPG), to endow graph lottery tickets with inductive pruning capacity. To prune the input graphs, we design a predictive model that generates an importance score for each edge based on the input. To prune the model parameters, we view the weights' magnitudes as their importance scores. We then design an iterative co-pruning strategy to trim the graph edges and GNN weights based on their importance scores. Although strikingly simple, ICPG surpasses the existing pruning method and is universally applicable in both inductive and transductive learning settings. On ten graph-classification and two node-classification benchmarks, ICPG achieves the same performance level with 14.26%-43.12% sparsity for graphs and 48.80%-91.41% sparsity for the GNN model.
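The abstract does not give implementation details, but the described recipe (a learned per-edge scorer for graph pruning plus magnitude-based weight pruning, applied iteratively) can be sketched as follows. This is a minimal illustration in plain PyTorch, not the authors' code: `EdgeScorer`, `prune_edges`, `prune_weights`, and the pruning fractions are hypothetical names and choices.

```python
# A minimal sketch of iterative co-pruning in the spirit of ICPG.
# Assumptions: node features x of shape [num_nodes, feat_dim] and an
# edge_index tensor of shape [2, num_edges]; all names are illustrative.
import torch
import torch.nn as nn


class EdgeScorer(nn.Module):
    """Predicts an importance score for each edge from its endpoint features."""

    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                       # each of shape [num_edges]
        pair = torch.cat([x[src], x[dst]], dim=-1)  # endpoint features per edge
        return self.mlp(pair).squeeze(-1)           # one score per edge


def prune_edges(edge_index: torch.Tensor, scores: torch.Tensor,
                frac: float) -> torch.Tensor:
    """Drop the `frac` lowest-scoring edges (graph sparsification)."""
    keep = scores.topk(int(scores.numel() * (1 - frac))).indices
    return edge_index[:, keep]


def prune_weights(model: nn.Module, frac: float) -> None:
    """Zero out the `frac` smallest-magnitude weights (model sparsification)."""
    for p in model.parameters():
        flat = p.detach().abs().flatten()
        threshold = flat.kthvalue(max(1, int(frac * flat.numel()))).values
        p.data.mul_((p.detach().abs() > threshold).float())


# Iterative co-pruning loop (training step omitted): at each round, retrain
# the GNN and the scorer jointly, then trim edges and weights by importance.
# for _ in range(num_rounds):
#     train(gnn, scorer, graphs)  # hypothetical joint training routine
#     edge_index = prune_edges(edge_index, scorer(x, edge_index), edge_frac)
#     prune_weights(gnn, weight_frac)
```

In this sketch the edge scorer is input-dependent, so it can score edges of graphs never seen during training, which is the property the paper highlights for inductive tasks; the weight pruning, by contrast, only needs the trained parameters' magnitudes.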