
NLGAE: A Graph Autoencoder Model Based on Improved Network Structure and Loss Function for Node Classification

Mapping high-dimensional, heterogeneous information such as graph topology and node attributes into a dense vector space via graph embedding is the mainstream approach to the problems raised by the non-Euclidean nature of graph data, namely computation-unfriendly representations and the high space complexity of adjacency matrices. Based on an analysis of the shortcomings of the classical graph autoencoder models GAE (graph autoencoder) and VGAE (variational graph autoencoder), this paper improves graph-autoencoder-based embedding in three respects, the encoder, the decoder, and the loss function, and proposes NLGAE, a graph autoencoder model with an improved network structure and loss function. First, in the model structure, the stacked graph convolutional layers of the encoder are inverted to form the decoder, addressing the inflexibility and limited expressiveness of the parameter-free decoder in GAE and VGAE; in addition, the attention-based graph convolutional network GAT is introduced to avoid fixed weight coefficients between neighboring nodes. Second, the redesigned loss function takes both the graph structure and the node feature attributes into account. Comparative experiments show that NLGAE, as an unsupervised model, learns high-quality node embeddings: on downstream node classification it outperforms classical unsupervised models such as DeepWalk, GAE, GraphMAE, and GATE, and, when paired with a suitable downstream classifier, it even outperforms supervised graph neural network models such as GAT and GCN.
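The architecture described above (a graph-convolutional encoder, a decoder formed by inverting the encoder's stacked layers to reconstruct node features, and a loss combining a structure term with a feature term) can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: plain GCN-style propagation stands in for the GAT attention layers, the weight matrices and the toy graph are made up, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy graph: 6 nodes with 4 features each (all values hypothetical)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(6, 4))
A_norm = normalize_adj(A)

# Encoder: two stacked graph-conv layers (4 -> 8 -> 2);
# the paper uses GAT attention here, replaced by fixed A_norm for brevity
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
Z = A_norm @ relu(A_norm @ X @ W1) @ W2        # node embeddings, shape (6, 2)

# Decoder: the encoder layers "inverted" (2 -> 8 -> 4), a parameterized
# decoder reconstructing features instead of a parameter-free inner product
W3, W4 = rng.normal(size=(2, 8)), rng.normal(size=(8, 4))
X_rec = A_norm @ relu(A_norm @ Z @ W3) @ W4    # reconstructed features

# Loss: structure term (binary cross-entropy on the inner-product
# adjacency reconstruction) plus feature term (mean-squared error)
A_rec = sigmoid(Z @ Z.T)
eps = 1e-9
loss_struct = -np.mean(A * np.log(A_rec + eps)
                       + (1 - A) * np.log(1 - A_rec + eps))
loss_feat = np.mean((X - X_rec) ** 2)
loss = loss_struct + loss_feat
```

In a real implementation the weights would be trained by gradient descent on `loss`, and the learned `Z` would then be fed to a downstream classifier for node classification.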

Graph representation learning; Graph auto-encoder; Attention mechanism; Node classification

Liao Bin, Zhang Tao, Yu Jiong, Li Min


School of Big Data Statistics, Guizhou University of Finance and Economics, Guiyang 550025

College of Information Engineering, Guizhou University of Traditional Chinese Medicine, Guiyang 550025

School of Information Science and Engineering, Xinjiang University, Urumqi 830008


National Natural Science Foundation of China; Xinjiang Tianshan Youth Program

61562078, 2018Q073

2024

Computer Science
Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.944
ISSN:1002-137X
Year, Volume (Issue): 2024, 51(10)