
Improved Masked Graph Autoencoder Model

As one of the important models in deep learning, the graph autoencoder (GAE) has received extensive attention in recent years. However, GAE tends to overemphasize proximity information at the expense of the structural information of the graph, which makes it unsuitable for downstream tasks other than link prediction. To address this shortcoming of the traditional GAE, researchers have introduced masking strategies into graph autoencoders, yielding masked graph autoencoder models for graph data. Building on this line of work, an improved masked graph autoencoder (MaskGAE) model is proposed. MaskGAE adopts masked graph modeling (MGM) as its proxy task: it masks a portion of the edges and attempts to reconstruct the missing part from the partially visible, unmasked graph structure. By tuning hyperparameters on the Cora dataset, the node classification accuracy of MaskGAE is improved from 84.05% to 84.55%, a gain of 0.5 percentage points.
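To make the edge-masking proxy task concrete, the sketch below shows one hypothetical way it could be implemented in plain PyTorch; it is an illustration of the idea described in the abstract, not the authors' code. A fraction of the edges is masked, a small two-layer GCN encoder propagates only over the visible edges, and an inner-product decoder is trained to score the masked edges as positives against randomly sampled node pairs as negatives. The toy graph, layer sizes, mask ratio, and training loop are all assumptions.

```python
# Minimal sketch of masked graph modeling (MGM) as a proxy task.
# Assumed data and hyperparameters; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adj(edge_index, num_nodes):
    """Symmetrically normalized dense adjacency with self-loops (GCN-style)."""
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj[edge_index[1], edge_index[0]] = 1.0
    adj = adj + torch.eye(num_nodes)
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNEncoder(nn.Module):
    """Two-layer GCN that only propagates over the visible (unmasked) edges."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)


def mask_edges(edge_index, mask_ratio=0.7):
    """Randomly split edges into a visible set and a masked set to reconstruct."""
    perm = torch.randperm(edge_index.size(1))
    num_masked = int(mask_ratio * edge_index.size(1))
    return edge_index[:, perm[num_masked:]], edge_index[:, perm[:num_masked]]


# Toy graph (assumed): 6 nodes with random features, one direction per edge.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 0],
                           [1, 2, 3, 4, 5, 5]])

encoder = GCNEncoder(in_dim=8, hid_dim=16, out_dim=16)
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.01)

for step in range(100):
    visible_edges, masked_edges = mask_edges(edge_index)
    adj = normalized_adj(visible_edges, num_nodes=6)   # encoder never sees masked edges
    z = encoder(x, adj)

    # Inner-product decoder: masked edges are positives, random pairs are negatives.
    pos = (z[masked_edges[0]] * z[masked_edges[1]]).sum(dim=1)
    neg_pairs = torch.randint(0, 6, masked_edges.shape)
    neg = (z[neg_pairs[0]] * z[neg_pairs[1]]).sum(dim=1)

    loss = (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos))
            + F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the mask ratio and the negative-sampling scheme are the main design choices; here they are fixed arbitrarily, and the learned embeddings z would then be passed to a separate classifier for the downstream node classification task.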

Autoencoder; self-supervised learning; masked graph modeling; graph-structured data

严鑫瑜、庞慧、石瑞雪、张爱玲、陈威


Hebei University of Architecture, Zhangjiakou, Hebei 075000, China

Zhangjiakou Big Data Technology Innovation Center, Zhangjiakou, Hebei 075000, China


2024

河北建筑工程学院学报 (Journal of Hebei Institute of Architecture and Civil Engineering)
Hebei University of Architecture

Impact factor: 0.502
ISSN: 1008-4185
Year, Volume (Issue): 2024, 42(1)