Black-Box Adversarial Attack Method on Modulation Recognition Neural Networks Based on Proximal Linear Combination of Signals
With the widespread application of deep learning in wireless communication, especially in signal modulation recognition, the vulnerability of neural networks to adversarial example attacks poses challenges to the security of wireless communication. Addressing the black-box attack scenario for wireless signals, where real-time feedback from the neural network is hard to obtain and only the recognition results are accessible, a black-box query adversarial attack method based on proximal linear combination is proposed. First, on a subset of the dataset, each original signal is proximally linearly combined with candidate target signals, i.e., the two signals are linearly combined within a range very close to the original signal (with weighting coefficients no greater than 0.05), and the result is fed to the neural network as a query. By counting the number of misrecognitions produced by the network over all proximal linear combinations, the specific target signal to which each original signal category is most susceptible under linear combination is determined; this signal is termed the optimal perturbation signal for that category. During attack testing, adversarial examples are generated by performing the proximal linear combination with the optimal perturbation signal corresponding to the signal's category. Experimental results show that, using the optimal perturbation signal found for each modulation category on the chosen subset, the recognition accuracy of the neural network drops from 94% to 50% when the attack is applied to the entire dataset, with lower perturbation power than attacks that add random noise. Furthermore, the generated adversarial examples exhibit some transferability to structurally similar neural networks. The proposed method, which generates new adversarial examples after the statistical query phase, is easy to implement and requires no further black-box queries.
Keywords: Deep learning; Adversarial examples; Signal recognition; Black-box attack; Adversarial signal
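
The abstract describes a two-phase procedure: a query phase that counts misrecognitions of proximal linear combinations to pick one optimal perturbation signal per modulation class, and an attack phase that reuses those signals without further queries. The sketch below illustrates this procedure under stated assumptions; the names `classify`, `signals`, `labels`, and the exact combination form `(1 - alpha) * x + alpha * x_target` are illustrative choices, not taken from the paper, and only the 0.05 bound on the weighting coefficient comes from the abstract.

```python
import numpy as np

ALPHA = 0.05  # weighting coefficient kept no greater than 0.05 (per the abstract)

def proximal_combination(x_orig, x_target, alpha=ALPHA):
    """Assumed form of the proximal linear combination: stay very close to the original signal."""
    return (1.0 - alpha) * x_orig + alpha * x_target

def find_optimal_perturbation_signals(classify, signals, labels, num_classes):
    """Query phase on a dataset subset.

    classify(x) stands for the black-box model: it returns only the predicted
    class label for input signal x. For every original class, count how often
    each candidate target signal causes a misrecognition, and keep the target
    signal with the highest count as that class's optimal perturbation signal.
    """
    miss_counts = {}  # (original_class, target_index) -> number of misrecognitions
    for x, y in zip(signals, labels):
        for t_idx, x_t in enumerate(signals):
            x_adv = proximal_combination(x, x_t)
            if classify(x_adv) != y:  # one black-box query per combination
                key = (y, t_idx)
                miss_counts[key] = miss_counts.get(key, 0) + 1
    best = {}
    for c in range(num_classes):
        candidates = {t: n for (cls, t), n in miss_counts.items() if cls == c}
        if candidates:
            best[c] = signals[max(candidates, key=candidates.get)]
    return best  # class label -> optimal perturbation signal

def attack(x, y, best):
    """Attack phase: no further queries; combine the input with the stored
    optimal perturbation signal for its class to form the adversarial example."""
    return proximal_combination(x, best[y])
```

In this reading, the query cost is paid once on the subset, and every later adversarial example is produced by a single weighted addition, which matches the abstract's claim that the method is easy to implement and needs no further black-box queries.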