PoisonEEG: A New EEG Backdoor Attack Based on Frequency Transformation
With the remarkable success of deep learning models, EEG-based brain-computer interfaces (BCIs) have gained widespread application. However, deep learning models are well known to be vulnerable to backdoor attacks. While backdoor attacks have achieved considerable success in the image and natural language domains, designing a covert and sophisticated attack against EEG data remains challenging due to its complexity, instability, and imbalanced distribution. Existing backdoor attacks are limited by the requirement to participate in the model training phase, without which they fail to remain concealed. To address these limitations, a novel backdoor attack method called PoisonEEG is proposed, which allows attackers to force the classification of EEG data into a target class without participating in the model training stage. Specifically, PoisonEEG involves three phases: first, a sample is selected as the trigger for the target class; second, reinforcement learning is used to learn optimal masks over the trigger's injection electrodes and frequency bands; third, linear interpolation is performed between the spectral features of the poisoned dataset and the trigger according to the learned masks. Experiments were conducted on two EEG tasks, emotion recognition and motor imagery. The results demonstrate that the PoisonEEG attack is effective, covert, and robust, successfully manipulating models in complex EEG data environments.
Keywords: EEG-based brain-computer interface (BCI); electroencephalogram; backdoor attack; reinforcement learning; frequency transformation
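To make the injection step described in the abstract (phase three) concrete, the following is a minimal sketch, assuming the electrode and frequency-band masks have already been produced by the reinforcement learning stage. The function name `inject_trigger`, the trial shapes, and the interpolation weight `alpha` are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of frequency-domain trigger injection via masked linear
# interpolation of spectra. Names, shapes, and parameters are assumptions.
import numpy as np

def inject_trigger(clean_trial, trigger_trial, electrode_mask, band_mask, alpha=0.3):
    """Blend the spectra of a clean EEG trial and the trigger trial.

    clean_trial, trigger_trial : (n_channels, n_samples) arrays
    electrode_mask             : (n_channels,) binary mask of injection electrodes
    band_mask                  : (n_freq_bins,) binary mask of injection frequency bins
    alpha                      : interpolation weight towards the trigger spectrum
    """
    clean_spec = np.fft.rfft(clean_trial, axis=-1)
    trigger_spec = np.fft.rfft(trigger_trial, axis=-1)

    # Outer product of the two masks selects (electrode, frequency-bin) pairs.
    mask = np.outer(electrode_mask, band_mask).astype(bool)

    # Linear interpolation of the complex spectra on the selected entries only.
    poisoned_spec = clean_spec.copy()
    poisoned_spec[mask] = (1 - alpha) * clean_spec[mask] + alpha * trigger_spec[mask]

    # Back to the time domain; unmasked electrodes and bands are left unchanged.
    return np.fft.irfft(poisoned_spec, n=clean_trial.shape[-1], axis=-1)

# Example: 32 channels, 4-second trials at 128 Hz, injecting on 4 electrodes in 8-13 Hz.
if __name__ == "__main__":
    fs, n_ch, n_samp = 128, 32, 512
    rng = np.random.default_rng(0)
    clean = rng.standard_normal((n_ch, n_samp))
    trigger = rng.standard_normal((n_ch, n_samp))
    freqs = np.fft.rfftfreq(n_samp, d=1 / fs)
    e_mask = np.zeros(n_ch)
    e_mask[[0, 3, 7, 12]] = 1
    b_mask = ((freqs >= 8) & (freqs <= 13)).astype(float)
    poisoned = inject_trigger(clean, trigger, e_mask, b_mask)
    print(poisoned.shape)  # (32, 512)
```

Restricting the interpolation to the masked (electrode, band) pairs is what keeps the perturbation localized and hard to notice, which is the property the abstract claims for the learned masks.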