Classification of group imagined-speech EEG signals based on an attention mechanism and deep learning
A classification method based on the convolutional block attention module (CBAM) and the Inception-V4 convolutional neural network was proposed to improve the classification accuracy of group imagined-speech EEG signals. CBAM was used to emphasize significant localized regions and extract distinctive features from the output feature maps of the convolutional neural network (CNN), thereby improving the classification performance for group imagined-speech EEG signals. The group imagined-speech EEG signals were first converted into time-frequency images by the short-time Fourier transform, and these images were then used to train the Inception-V4 network incorporating CBAM. Experiments on an open-access dataset showed that the proposed method achieved an accuracy of 52.2% in classifying six types of short words, which was 4.1 percentage points higher than Inception-V4 alone and 5.9 percentage points higher than VGG-16. Furthermore, the training time was reduced greatly by transfer learning.
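As a concrete illustration of the attention module named above, the following PyTorch sketch shows one common way to implement CBAM and apply it to a CNN feature map. The reduction ratio, kernel size, and feature-map dimensions are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal CBAM sketch (assumed hyperparameters), applied to a CNN feature map.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: squeeze spatial dims with avg/max pooling, reweight channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))              # (B, C) from average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))               # (B, C) from max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Spatial attention: pool across channels, then a conv produces a 2-D mask."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)               # (B, 1, H, W)
        mx = x.amax(dim=1, keepdim=True)                # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """CBAM = channel attention followed by spatial attention on a feature map."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    # Example: refine a hypothetical Inception-V4 feature map of shape (B, 1536, 8, 8).
    feat = torch.randn(8, 1536, 8, 8)
    refined = CBAM(channels=1536)(feat)
    print(refined.shape)                                # torch.Size([8, 1536, 8, 8])
```

In this arrangement the attention module leaves the feature-map shape unchanged, so it can be inserted after a convolutional block of the backbone without altering the rest of the network.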