EEG-fNIRS emotion recognition based on multi-brain attention mechanism capsule fusion network
A multi-brain-region attention mechanism and capsule fusion network based on CapsNet (MBA-CF-cCapsNet) was proposed to improve the accuracy of emotion recognition. EEG-fNIRS signals evoked by emotional video clips were collected to construct the TYUT3.0 dataset, and EEG and fNIRS features were extracted and mapped to feature matrices. The EEG and fNIRS features were fused by the multi-brain-region attention mechanism, which assigns different weights to the features of different brain regions so that higher-quality primary capsules can be extracted. A capsule fusion module was used to reduce the number of capsules entering the dynamic routing mechanism, thereby shortening the running time of the model. Experiments with the MBA-CF-cCapsNet model on the TYUT3.0 dataset show that the accuracy of emotion recognition using the two signals together is 1.53% and 14.35% higher than that of single-modal EEG and fNIRS, respectively. The average recognition rate of MBA-CF-cCapsNet is 4.98% higher than that of the original CapsNet model, and 1%-5% higher than that of CapsNet-based emotion recognition models in common use.
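The two structural ideas named in the abstract, per-brain-region attention weighting of the fused EEG-fNIRS features and capsule fusion before dynamic routing, can be sketched as follows. This is a minimal illustrative sketch and not the authors' implementation: the module names, tensor shapes, number of brain regions, and the learned linear mixing used to merge capsules are all assumptions made only for illustration.

import torch
import torch.nn as nn


def squash(s: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Standard CapsNet squashing non-linearity.
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + 1e-8)


class MultiBrainRegionAttention(nn.Module):
    """Assigns a learned weight to the features of each brain region (assumed layout)."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # One attention score per region, produced from that region's feature vector.
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_regions, feat_dim) -- fused EEG+fNIRS features per region.
        weights = torch.softmax(self.score(x).squeeze(-1), dim=1)  # (batch, n_regions)
        return x * weights.unsqueeze(-1)                           # reweighted region features


class CapsuleFusion(nn.Module):
    """Reduces the number of primary capsules before dynamic routing (illustrative)."""

    def __init__(self, n_in_caps: int, n_out_caps: int):
        super().__init__()
        # Learned mixing across the capsule axis: n_in_caps -> n_out_caps.
        self.mix = nn.Parameter(torch.randn(n_out_caps, n_in_caps) * 0.01)

    def forward(self, caps: torch.Tensor) -> torch.Tensor:
        # caps: (batch, n_in_caps, caps_dim)
        fused = torch.einsum("oi,bid->bod", self.mix, caps)        # (batch, n_out_caps, caps_dim)
        return squash(fused)


if __name__ == "__main__":
    batch, n_regions, feat_dim = 4, 8, 64                          # assumed shapes
    x = torch.randn(batch, n_regions, feat_dim)
    x = MultiBrainRegionAttention(feat_dim)(x)
    caps = x.reshape(batch, -1, 16)                                # treat as primary capsules
    caps = CapsuleFusion(caps.shape[1], 8)(caps)                   # fewer capsules enter routing
    print(caps.shape)                                              # torch.Size([4, 8, 16])

In this sketch the fused capsules would then be passed to an ordinary dynamic-routing layer; shrinking the capsule set first is what reduces the routing cost, which is the effect the abstract attributes to the capsule fusion module.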