Affective Computing for E-Learning Based on Multimodal Data Fusion
Owing to its intelligence and personalization, online learning has increasingly become a favored mainstream learning method. However, the "affective gap" severely hampers the development of online teaching activities. It is imperative to research how to instantaneously and accurately perceive learners' emotional cues in order to provide guidance for improving learning performance. This paper constructs a multimodal data fusion model for affective computing in online learning. Facial expression, voice, and text data from subjects are collected, and emotion recognition models are employed to obtain recognition results for each modality. Multimodal affective computing in online learning is then achieved through decision-level fusion, and the optimal affective computing model is determined. The study shows that the average recognition accuracy of the optimal model is 14.51% higher than that of single-modal emotion recognition, confirming the feasibility and effectiveness of the model for affective computing in online learning scenarios.
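The decision-level fusion described above can be sketched as a weighted combination of per-modality class probabilities, with the fused class chosen by argmax. This is a minimal illustration only: the emotion labels, probability values, and modality weights below are assumptions for demonstration, not values or parameters from the paper.

```python
import numpy as np

# Hypothetical per-modality emotion probabilities over three classes
# (happy, neutral, sad), as produced by separate recognizers for
# facial expression, voice, and text. Values are illustrative.
face  = np.array([0.70, 0.20, 0.10])
voice = np.array([0.50, 0.30, 0.20])
text  = np.array([0.60, 0.25, 0.15])

# Decision-level fusion: weight each modality's output and combine.
# The weights are assumed for this sketch, not taken from the paper.
weights = np.array([0.5, 0.2, 0.3])
fused = weights[0] * face + weights[1] * voice + weights[2] * text

labels = ["happy", "neutral", "sad"]
predicted = labels[int(np.argmax(fused))]
print(predicted)  # happy
```

Because each modality's vector sums to 1 and the weights sum to 1, the fused vector is also a valid probability distribution, so the argmax decision remains well defined.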