
A Random Walk Based Black-Box Adversarial Attack against Graph Neural Networks

Graph neural networks (GNNs) have achieved remarkable success on many graph analysis tasks. However, recent studies have revealed their susceptibility to adversarial attacks. Existing research on black-box attacks typically requires the attacker to know all of the target model's training data, and does not apply to scenarios where the attacker cannot obtain the feature representations of graph nodes. This paper proposes a stricter black-box attack model, in which the attacker knows only the graph structure and the labels of selected nodes, but not the node feature representations. Under this attack model, the paper proposes a black-box adversarial attack method against graph neural networks. The method approximates the influence of each node on the model output and identifies the optimal perturbation with a greedy strategy. Experiments show that, although less information is available, the attack success rate of the algorithm is close to that of state-of-the-art algorithms while achieving a higher attack speed. In addition, the attack method also exhibits transferability and resistance to defenses.
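The two steps the abstract names, approximating each node's influence on the model output and then greedily selecting perturbations, can be sketched roughly as follows. This is an illustrative reconstruction under assumed details (influence estimated as random-walk visit frequency from the target node, perturbations restricted to adding edges at the target, a small walk length and walk count), not the paper's actual algorithm; the function names and parameters are hypothetical.

```python
import random
from collections import defaultdict

def random_walk_influence(adj, target, walk_len=4, num_walks=200, seed=0):
    """Estimate each node's influence on `target` by counting how often
    short random walks started at `target` visit it. Visit frequency uses
    only the graph structure, so no node features are required."""
    rng = random.Random(seed)
    visits = defaultdict(int)
    for _ in range(num_walks):
        node = target
        for _ in range(walk_len):
            neighbors = adj[node]
            if not neighbors:
                break
            node = rng.choice(neighbors)
            visits[node] += 1
    total = sum(visits.values()) or 1
    return {n: c / total for n, c in visits.items()}

def greedy_edge_perturbation(adj, target, budget=1, **walk_kw):
    """Greedily pick `budget` edge additions at the target node, each time
    re-estimating influence scores and connecting the target to the
    highest-scoring non-neighbor. (The real method's scoring also uses the
    labels of selected nodes, which this sketch omits.)"""
    adj = {n: list(nb) for n, nb in adj.items()}  # work on a copy
    perturbations = []
    for _ in range(budget):
        scores = random_walk_influence(adj, target, **walk_kw)
        # candidate nodes not already adjacent to the target
        candidates = [n for n in adj if n != target and n not in adj[target]]
        if not candidates:
            break
        best = max(candidates, key=lambda n: scores.get(n, 0.0))
        adj[target].append(best)
        adj[best].append(target)
        perturbations.append((target, best))
    return perturbations, adj
```

Because influence is recomputed after every flip, the greedy loop accounts for how earlier perturbations reshape the walk distribution before choosing the next one.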

artificial intelligence security; graph neural network; adversarial attacks

LU Xiaofeng, CHENG Tianze, LONG Chengnian


School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China

School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China


National Natural Science Foundation of China

62136006

2024

Netinfo Security
The Third Research Institute of the Ministry of Public Security; Computer Security Professional Committee of the China Computer Federation


Indexed in: CSTPCD, CHSSCD, Peking University Core Journals
Impact factor: 0.814
ISSN:1671-1122
Year, Volume (Issue): 2024, 24(10)