
Deep Graph Attention Adversarial Variational Autoencoder

Existing graph autoencoders overlook the differences among a node's neighbors and the underlying distribution of the graph data. To improve the embedding capability of graph autoencoders, a graph attention adversarial variational autoencoder (AAVGA-d) is proposed. This method introduces attention into the encoder and applies an adversarial mechanism during embedding training. The graph attention encoder adaptively assigns weights to neighboring nodes, while the adversarial regularization pushes the distribution of the embeddings generated by the encoder toward the true data distribution. To allow deeper stacks of graph attention layers, a random edge deletion technique for attention networks (RDEdge) is designed, which reduces the over-smoothing information loss caused by excessively deep layers. Experimental results show that the graph embedding capability of AAVGA-d is competitive with currently popular graph autoencoders.
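The adaptive neighbor weighting mentioned in the abstract follows the standard graph-attention formulation (GAT): each neighbor's score is a learned function of the projected features of both endpoints, softmax-normalized over the neighborhood. The sketch below is only an illustration of that general idea, not the paper's exact encoder; the function name `attention_weights`, the toy shapes, and the parameters are assumptions.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def attention_weights(h, i, neighbors, W, a):
    """GAT-style adaptive weights over node i's neighborhood (illustrative sketch).

    h: (N, F) node features, W: (F, F') shared projection, a: (2F',) attention vector.
    Scores e_ij = LeakyReLU(a^T [W h_i || W h_j]) are softmax-normalized over
    the neighborhood, so more relevant neighbors receive larger weights.
    """
    Wh = h @ W                                     # project all nodes: (N, F')
    e = np.array([leaky_relu(a @ np.concatenate([Wh[i], Wh[j]]))
                  for j in neighbors])             # unnormalized scores e_ij
    e = e - e.max()                                # shift for numerical stability
    alpha = np.exp(e) / np.exp(e).sum()            # softmax over the neighborhood
    return alpha
```

Because the weights are computed from node features rather than fixed by the adjacency structure, the encoder can emphasize informative neighbors instead of averaging them uniformly.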
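The RDEdge technique is described only at a high level here; the following is a minimal sketch of the general random-edge-deletion idea it builds on (as in DropEdge), assuming an edge-list graph representation. The name `drop_edges` and the `drop_rate` parameter are illustrative, not from the paper.

```python
import numpy as np

def drop_edges(edge_list, drop_rate, rng=None):
    """Randomly delete a fraction of edges before a training pass (sketch).

    Sparsifying message passing this way slows feature mixing across the graph,
    mitigating the over-smoothing that appears when many attention layers are stacked.
    """
    if rng is None:
        rng = np.random.default_rng()
    edges = np.asarray(edge_list)                  # (E, 2) edge list
    keep = rng.random(len(edges)) >= drop_rate     # independent keep-mask per edge
    return edges[keep]
```

In training, a fresh mask would typically be drawn each epoch, while evaluation uses the full edge set.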

Graph attention; Over-smoothing; Autoencoder; Adversarial

翁自强、张维玉、孙旭


School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong, China


Funding: National Key R&D Program of China (2018YFC0831704); National Natural Science Foundation of China (61806105); Natural Science Foundation of Shandong Province (ZR2017MF056)

2024

Computer Applications and Software (计算机应用与软件)
Shanghai Institute of Computing Technology; Shanghai Development Center of Computer Software Technology


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.615
ISSN: 1000-386X
Year, volume (issue): 2024, 41(9)