计算机应用与软件 (Computer Applications and Software), 2024, Vol. 41, Issue (9): 156-165. DOI: 10.3969/j.issn.1000-386x.2024.09.023

DEEP GRAPH ATTENTION ADVERSARIAL VARIATIONAL AUTOENCODER

翁自强 张维玉 孙旭

Author Information

  • 1. School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong, China


Abstract

Existing graph autoencoders ignore the differences among a node's neighbors and the latent data distribution of the graph. To improve the embedding ability of graph autoencoders, a graph attention adversarial variational autoencoder (AAVGA-d) is proposed. This method introduces attention into the encoder and uses an adversarial mechanism during embedding training. The graph attention encoder adaptively assigns weights to neighbor nodes, and the adversarial regularization pushes the distribution of the embedding vectors generated by the encoder toward the true distribution of the data. To allow deeper stacks of graph attention layers, a random edge deletion technique (RDEdge) for attention networks is designed, which reduces the over-smoothing information loss caused by excessively deep layers. Experimental results show that the graph embedding capability of AAVGA-d is competitive with currently popular graph autoencoders.
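The abstract describes RDEdge only at a high level. As illustration, here is a minimal sketch of per-epoch random edge deletion in the spirit of RDEdge (which resembles the DropEdge technique), assuming the graph's edges are stored as a (2, E) index array; the function name `rdedge_drop` and the NumPy representation are assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def rdedge_drop(edge_index: np.ndarray, drop_rate: float, rng=None) -> np.ndarray:
    """Randomly delete a fraction of edges before a training pass.

    edge_index: (2, E) array of source/target node indices.
    drop_rate:  fraction of edges to remove, e.g. 0.2.
    Returns a (2, E') array containing only the kept edges.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_edges = edge_index.shape[1]
    # Bernoulli keep mask: each edge survives with probability 1 - drop_rate.
    keep = rng.random(num_edges) >= drop_rate
    return edge_index[:, keep]

# Example: a 4-node cycle graph.
edges = np.array([[0, 1, 2, 3],
                  [1, 2, 3, 0]])
sparser = rdedge_drop(edges, 0.5, rng=np.random.default_rng(0))
```

During training, the mask would be resampled each epoch so that the attention layers see a different sparsified graph every pass, which is the mechanism that mitigates over-smoothing as the network deepens.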


Key words

Graph attention; Over-smoothing; Autoencoder; Adversarial


Funding

National Key R&D Program of China (2018YFC0831704)

National Natural Science Foundation of China (61806105)

Natural Science Foundation of Shandong Province (ZR2017MF056)

Publication Year

2024
计算机应用与软件 (Computer Applications and Software)
Published by: Shanghai Institute of Computing Technology; Shanghai Computer Software Technology Development Center
Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.615
ISSN: 1000-386X