Multidimensional EEG emotion recognition using SKM and Transformer
Although deep-learning-based EEG emotion recognition has made significant progress, many challenges remain, the most prominent being the lack of effective analysis capability and the insufficient extraction of the differentiated information carried by EEG signals across dimensions. To address these problems, this paper proposes a dual-path parallel neural network architecture with multi-dimensional input signals. First, the multi-channel EEG signals are converted into a series of time-frequency features and reconstructed into a multi-dimensional feature matrix, capturing the multi-dimensional information in the EEG signals more comprehensively. These feature matrices are then fed into two independent processing paths, enabling parallel and efficient computation.

In the first path, SKM (an optimized SK-MiniXception network) is introduced. It retains the strong feature-extraction ability of conventional convolutional structures while incorporating an attention mechanism, so that at each stage of training the model focuses on the EEG channels that contribute most to emotional expression and improves recognition accuracy by assigning these channels higher weights. In the same path, the BAGRU-BLS module is proposed: it exploits the strengths of the bidirectional gated recurrent unit for time-series data, uses an attention mechanism to strengthen the weights of time segments that express emotion strongly, and optimizes local temporal feature extraction with a broad learning system (BLS) module. This module not only captures the dynamic characteristics of the four-dimensional EEG representation but also reduces the risk of overfitting during training.

In the second path, to further mine the global temporal information of the two-dimensional EEG representation, the two-dimensional feature matrix processed by a one-dimensional convolutional layer is fed into a Transformer network. The Transformer is known for its powerful global context modeling, which extracts more comprehensive and coherent time-frequency information across long-range dependencies and thereby avoids the loss of temporal information that can occur when the feature matrix undergoes dimension transformation. Finally, to integrate information from the two paths, an adaptive feature fusion module is designed: it evaluates the importance of each feature and reduces redundant information through nonlinear combination, making the final emotion classification and depression detection results more accurate and reliable.

Experimental results show that the proposed dual-path parallel network delivers excellent performance on multiple datasets. On the DEAP public dataset, the model achieves 96.13% accuracy on the four-class emotion task over the valence-arousal dimensions; on the MODMA dataset, it reaches 97.51% accuracy in detecting major depression. These results are significantly better than those of traditional convolutional and recurrent neural network models, verifying the effectiveness of the proposed model for EEG emotion recognition.
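The channel-weighting idea in the SKM path (assigning higher weights to EEG channels that contribute more to emotional expression) can be sketched as a squeeze-and-excitation style gate. This NumPy sketch is illustrative only and is not the paper's actual SKM block; the function name, weight matrices, and shapes are assumptions:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel weighting over an EEG
    feature map x of shape (C, T): C channels, T time steps."""
    s = x.mean(axis=1)                    # squeeze: per-channel average, (C,)
    h = np.maximum(0.0, w1 @ s)           # excitation: reduce + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # sigmoid gates in (0, 1), one per channel
    return x * a[:, None]                 # reweight each channel's time series

rng = np.random.default_rng(0)
C, T, r = 8, 16, 2                        # 8 channels, 16 steps, reduction ratio 2
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16)
```

Channels whose gate value is near 1 pass through almost unchanged, while less informative channels are attenuated, which matches the weighting behavior described above.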
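The temporal attention in the BAGRU-BLS path (strengthening the weights of time segments with strong emotional expression) amounts to attention pooling over the bidirectional hidden states. A minimal NumPy sketch, assuming a learned scoring vector `v` and precomputed hidden states `h` (the real module trains these inside the BiGRU):

```python
import numpy as np

def temporal_attention(h, v):
    """Attention pooling over hidden states h of shape (T, D): score each
    time step with vector v, softmax the scores, return weighted context."""
    scores = h @ v                        # one score per time step, (T,)
    e = np.exp(scores - scores.max())     # numerically stable softmax
    alpha = e / e.sum()                   # attention weights, sum to 1
    return alpha @ h, alpha               # context (D,), weights (T,)

rng = np.random.default_rng(1)
T, D = 10, 6                              # 10 time steps, hidden size 6
h = rng.standard_normal((T, D))           # stand-in for BiGRU outputs
v = rng.standard_normal(D)                # stand-in for the learned scoring vector
context, alpha = temporal_attention(h, v)
print(context.shape)  # (6,)
```

Time steps with higher scores dominate the pooled context vector, which is what lets the module emphasize emotionally salient segments.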
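The adaptive feature fusion module can be sketched as a learned gate producing a convex combination of the two path outputs. This is a simplified stand-in for the module described above (the gate matrix `wg` and a plain softmax gate are assumptions; the paper's module also applies a nonlinear combination to reduce redundancy):

```python
import numpy as np

def adaptive_fusion(f1, f2, wg):
    """Gate two path features: wg maps the concatenated features to two
    logits; softmax yields adaptive weights for a convex combination."""
    z = np.concatenate([f1, f2])          # joint view of both paths, (2D,)
    g = wg @ z                            # gate logits, (2,)
    e = np.exp(g - g.max())
    a = e / e.sum()                       # adaptive importance weights, sum to 1
    return a[0] * f1 + a[1] * f2          # fused feature, (D,)

rng = np.random.default_rng(2)
D = 8
f1 = rng.standard_normal(D)               # e.g. SKM / BAGRU-BLS path output
f2 = rng.standard_normal(D)               # e.g. Transformer path output
wg = rng.standard_normal((2, 2 * D)) * 0.1
fused = adaptive_fusion(f1, f2, wg)
print(fused.shape)  # (8,)
```

Because the gate depends on the features themselves, the relative weight of each path adapts per input rather than being fixed, which is the sense in which the fusion is "adaptive".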