Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
The robustness of graph neural networks (GNNs) is a critical research topic in deep learning. Many researchers have designed regularization methods to enhance the robustness of neural networks, but theoretical analysis of the principles underlying robustness is lacking. To address the weaknesses of current robustness design methods, this paper offers new insights into how the robustness of GNNs can be guaranteed. A novel regularization strategy, named Lya-Reg, is designed to guarantee the robustness of GNNs based on Lyapunov theory. Our results give new insights into how regularization can mitigate various adversarial effects on different graph signals. Extensive experiments on various public datasets demonstrate that the proposed regularization method is more robust than state-of-the-art methods such as the L1-norm, L2-norm, L21-norm, Pro-GNN, PA-GNN, and GARNET against various types of graph adversarial attacks.
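The abstract does not specify the form of the Lya-Reg term itself, but the baseline penalties it compares against (L1-norm, L2-norm, and L21-norm) are standard. A minimal sketch of these three regularizers applied to a GNN weight matrix, assuming a NumPy array `W` as the layer weights:

```python
import numpy as np

# Baseline regularization penalties named in the abstract, applied to a
# GNN layer's weight matrix W. The paper's Lya-Reg term is not defined
# here, so only the standard norm penalties are sketched.

def l1_penalty(W):
    # L1-norm: sum of absolute entries; promotes element-wise sparsity.
    return np.abs(W).sum()

def l2_penalty(W):
    # Squared L2 (Frobenius) norm: the usual weight-decay penalty.
    return (W ** 2).sum()

def l21_penalty(W):
    # L2,1-norm: sum of row-wise L2 norms; promotes row sparsity,
    # i.e. entire input features can be zeroed out.
    return np.sqrt((W ** 2).sum(axis=1)).sum()

W = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(l1_penalty(W))   # 7.0
print(l2_penalty(W))   # 25.0
print(l21_penalty(W))  # 5.0
```

In training, any of these would be added to the task loss with a tunable coefficient, e.g. `loss = task_loss + lam * l21_penalty(W)`.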
Keywords: Deep learning; Graph neural network; Robustness; Lyapunov; Regularization
Wenjie YAN, Ziqi LI, Yongjun QI
School of Artificial Intelligence,Hebei University of Technology,Tianjin 300401,China
School of Computer Science and Engineering, North China Institute of Aerospace Engineering, Langfang 065000, China
Funding: National Natural Science Foundation of China; Doctoral Fund of North China Institute of Aerospace Engineering