Multi-model fusion temporal-spatial feature motor imagery electroencephalogram decoding method
Motor imagery electroencephalogram (MI-EEG) has been applied in brain-computer interfaces (BCIs) to assist patients with upper- and lower-limb dysfunction in rehabilitation training. However, the limited decoding performance of MI-EEG and an over-reliance on pre-processing are restricting the broader development of BCIs. We propose a multi-model fusion temporal-spatial feature motor imagery electroencephalogram decoding method (MMFTSF). MMFTSF uses a temporal-spatial convolutional network to extract shallow features, a multi-head ProbSparse self-attention mechanism to focus on the most informative features, a temporal convolutional network to extract high-dimensional temporal features, a fully connected layer with a softmax classifier for classification, and a convolution-based sliding window together with a spatial-information enhancement module to further improve decoding performance on MI-EEG. Experimental results show that the proposed method reaches 89.03% classification accuracy on the public BCI Competition IV-2a dataset, demonstrating that MMFTSF achieves strong classification performance on MI-EEG.
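As a rough illustration of the pipeline named above, the following is a minimal PyTorch sketch of its main stages. The layer widths, kernel lengths, and the substitution of standard multi-head self-attention for the paper's ProbSparse variant are assumptions for illustration, not the authors' exact configuration; the sliding-window and spatial-information enhancement modules are omitted for brevity.

# Minimal sketch of an MMFTSF-style decoder. All hyperparameters are
# assumptions; standard multi-head attention stands in for the paper's
# multi-head ProbSparse self-attention.
import torch
import torch.nn as nn

class MMFTSFSketch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4, d_model=32):
        super().__init__()
        # Temporal-spatial convolution: temporal filtering, then a spatial
        # convolution across all EEG electrodes (shallow feature extraction).
        self.temporal_conv = nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12))
        self.spatial_conv = nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1))
        self.bn = nn.BatchNorm2d(d_model)
        self.pool = nn.AvgPool2d(kernel_size=(1, 8))  # temporal downsampling
        # Stand-in for the multi-head ProbSparse self-attention (assumption).
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        # Temporal convolutional network branch: dilated 1-D convolutions
        # extracting higher-level temporal features.
        self.tcn = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=4, dilation=1, padding=3),
            nn.ELU(),
            nn.Conv1d(d_model, d_model, kernel_size=4, dilation=2, padding=6),
            nn.ELU(),
        )
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        x = x.unsqueeze(1)                 # (batch, 1, channels, time)
        x = self.temporal_conv(x)
        x = self.spatial_conv(x)           # collapses the electrode dimension
        x = self.pool(torch.relu(self.bn(x)))
        x = x.squeeze(2).permute(0, 2, 1)  # (batch, time, d_model)
        x, _ = self.attn(x, x, x)          # attend to the most informative steps
        x = x.permute(0, 2, 1)             # (batch, d_model, time) for the TCN
        x = self.tcn(x)[..., -1]           # last step summarizes the sequence
        return self.classifier(x)          # logits; softmax yields class scores

# Example: a batch of BCI IV-2a style trials (22 channels, 1000 samples).
model = MMFTSFSketch()
logits = model(torch.randn(8, 22, 1000))
probs = logits.softmax(dim=-1)             # per-class probabilities

In this sketch the softmax is applied outside the network, as is conventional when training with a cross-entropy loss; the four output classes correspond to the motor imagery tasks of BCI Competition IV-2a.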