Combining Multimodal Data and Hybrid Models for Emotion Recognition
The precise and reliable identification of human emotions is a challenging yet profoundly meaningful undertaking. However, emotions are difficult to describe comprehensively with a single modal signal due to their intricate nature, and there is still room to improve the accuracy of emotion recognition based on physiological signals. This paper therefore introduces a novel hybrid model for multimodal emotion recognition, denoted FCAN-FFM-LightGBM. The model comprises two key components: FCAN-FFM, serving as a feature processor, and LightGBM, functioning as a classifier. Emotion recognition is conducted using electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) signals. Through extensive experimental evaluation on the public DEAP dataset, accuracies of 95.92%, 97.22%, and 97.16% were achieved in the four-class classification, arousal, and valence dimension experiments, respectively. These outcomes demonstrate the efficacy of multimodal fusion in enhancing emotion recognition accuracy, surpassing the performance of unimodal approaches. Furthermore, compared with other methods, the proposed method reduces computational cost while achieving higher accuracy in emotion classification.
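To make the two-stage design concrete, the following is a minimal sketch of the feature-processor-plus-classifier pipeline. The `extract_features` function is a hypothetical placeholder for FCAN-FFM (the paper's actual module is a learned network whose internals are not given here); the array shapes, channel counts, and LightGBM hyperparameters are illustrative assumptions, not the reported configuration.

```python
import numpy as np
from lightgbm import LGBMClassifier

# Hypothetical stand-in for the FCAN-FFM feature processor: the real
# component is a trained network; here we simply concatenate basic
# per-channel statistics from each modality as placeholder features.
def extract_features(eeg, eog, emg):
    feats = []
    for signal in (eeg, eog, emg):
        feats.append(signal.mean(axis=-1))  # per-channel mean
        feats.append(signal.std(axis=-1))   # per-channel std
    return np.concatenate(feats, axis=-1)

# Synthetic data shaped like DEAP-style trials:
# (n_trials, n_channels, n_samples); labels are four-class, matching
# the quadrants of the valence/arousal plane.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((200, 32, 128))
eog = rng.standard_normal((200, 2, 128))
emg = rng.standard_normal((200, 2, 128))
labels = rng.integers(0, 4, size=200)

X = extract_features(eeg, eog, emg)

# Stage 2: LightGBM classifies the fused feature vectors.
clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X[:160], labels[:160])
print("held-out accuracy:", (clf.predict(X[160:]) == labels[160:]).mean())
```

The design choice this illustrates is the division of labor: a feature processor fuses the three physiological modalities into one vector per trial, and a gradient-boosting classifier handles the final decision, which is typically cheaper to train and run than an end-to-end deep classifier.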