
Black-box Adversarial Attack Method on Signal Recognition Neural Networks Based on Proximal Linear Combination

With the extensive application of deep learning in wireless communication, and in signal modulation recognition in particular, the vulnerability of neural networks to adversarial examples also affects the security of wireless communication. For the black-box attack scenario in which a wireless signal cannot obtain real-time feedback from the neural network and only the recognition result can be accessed, a black-box query adversarial attack method based on proximal linear combination is proposed. First, on a subset of the dataset, each original signal is combined with a target signal through a proximal linear combination, that is, a linear combination kept very close to the original signal (the weighting coefficient is no greater than 0.05), and the result is fed into the network under attack to query its recognition result. By counting the misrecognitions over all proximal linear combinations, the specific target signal to which each class of original signals is most susceptible is identified; this signal is termed the optimal perturbation signal. During attack testing, adversarial examples are generated by performing the proximal linear combination with the optimal perturbation signal that corresponds to the signal's class. Experimental results show that applying each modulation class's optimal perturbation signal, selected on the chosen subset, to the entire dataset reduces the recognition accuracy of the neural network from 94% to 50%, with lower perturbation power than adding random noise. Furthermore, the generated adversarial examples show some transferability to neural networks with similar structures. Once the query statistics have been collected, new adversarial examples can be generated easily and without any further black-box queries.
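The procedure above has two phases: a query phase that statistically identifies each class's optimal perturbation signal on a subset, and an attack phase that reuses those signals without further queries. The following Python sketch illustrates that flow under stated assumptions; the identifiers (query_model, signals, candidates, EPS) and the exact form of the linear combination are illustrative and not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the proximal-linear-combination black-box attack described above.
# All names (query_model, signals, candidates, EPS) are illustrative assumptions.

EPS = 0.05  # weighting coefficient of the target signal, no greater than 0.05


def proximal_combination(x, target, eps=EPS):
    """Linearly combine an original signal with a target signal while staying
    very close to the original (proximal linear combination)."""
    return (1.0 - eps) * x + eps * target


def find_optimal_perturbation_signals(query_model, signals, labels, candidates):
    """Query phase on a subset: for every original class, count how often each
    candidate target signal makes the black-box network misrecognize, and keep
    the candidate with the most errors as that class's optimal perturbation signal."""
    num_classes = int(max(labels)) + 1
    error_counts = np.zeros((num_classes, len(candidates)), dtype=int)
    for x, y in zip(signals, labels):
        for j, target in enumerate(candidates):
            x_adv = proximal_combination(x, target)
            if query_model(x_adv) != y:  # black-box query: only the label is observed
                error_counts[int(y), j] += 1
    best = error_counts.argmax(axis=1)
    return {c: candidates[best[c]] for c in range(num_classes)}


def attack(x, signal_class, optimal_signals):
    """Attack phase: apply the proximal linear combination with the optimal
    perturbation signal of the signal's class; no further queries are needed."""
    return proximal_combination(x, optimal_signals[signal_class])
```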

Deep learning; Adversarial examples; Signal recognition; Black-box attack; Adversarial signal

郭宇琦、李东阳、闫镔、王林元


Laboratory of Imaging and Intelligent Processing, Strategic Support Force Information Engineering University, Zhengzhou 450001, China


National Natural Science Foundation of China

62271504

2024

Computer Science (计算机科学)
Chongqing Southwest Information Co., Ltd. (formerly the Southwest Information Center of the Ministry of Science and Technology)


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.944
ISSN:1002-137X
Year, Volume (Issue): 2024, 51(10)