Convolutional Transformer EEG Emotion Recognition Model Based on Multi-domain Information Fusion
Current emotion recognition methods for electroencephalogram (EEG) signals seldom fuse spatial, temporal, and frequency information, and most can extract only local EEG features, limiting their ability to model global correlations. This article proposes an EEG emotion recognition method based on a 3D-CNN-Transformer mechanism (3D-CTM) model with multi-domain information fusion. The method first designs a 3D feature structure tailored to the characteristics of EEG signals, simultaneously fusing their spatial, temporal, and frequency information. A convolutional neural network module then learns deep features from the fused multi-domain information, after which a Transformer self-attention module extracts the global correlations within the feature representation. Finally, global average pooling integrates the feature information for classification. Experimental results show that the 3D-CTM model achieves an average accuracy of 96.36% on the SEED dataset for three-class classification and 87.44% on the SEED-IV dataset for four-class classification, effectively improving emotion recognition accuracy.
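The pipeline described above (3D feature volume → 3D-CNN → Transformer self-attention → global average pooling → classifier) can be sketched as follows. This is a minimal illustrative sketch, assuming a PyTorch implementation; all layer sizes, the 9×9 electrode grid, and the 5 frequency bands are hypothetical choices, not the authors' exact configuration.

```python
# Hypothetical sketch of the 3D-CTM pipeline: a 3D-CNN front end over a
# fused (frequency x spatial) EEG feature volume, a Transformer encoder
# for global correlations, then global average pooling and a linear
# classifier. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class CTM3D(nn.Module):
    def __init__(self, num_classes=3, d_model=64):
        super().__init__()
        # 3D convolutions learn joint spatial-frequency features
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.Conv3d(32, d_model, kernel_size=3, padding=1),
            nn.BatchNorm3d(d_model), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.fc = nn.Linear(d_model, num_classes)

    def forward(self, x):                       # x: (batch, 1, bands, H, W)
        f = self.cnn(x)                         # (batch, d_model, bands, H, W)
        tokens = f.flatten(2).transpose(1, 2)   # (batch, seq_len, d_model)
        g = self.transformer(tokens)            # global self-attention
        pooled = g.mean(dim=1)                  # global average pooling
        return self.fc(pooled)

model = CTM3D(num_classes=3)
# e.g. 5 frequency bands mapped onto a 9x9 electrode grid (assumed layout)
out = model(torch.randn(2, 1, 5, 9, 9))
print(out.shape)  # torch.Size([2, 3])
```

For the SEED-IV four-class setting, only `num_classes` would change; the feature volume and backbone stay the same.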