Research on Wearable Emotion Recognition Based on Multi-Source Domain Adversarial Transfer Learning
Emotions profoundly affect human well-being and cognitive function, making them a subject of paramount significance in modern life, where psychological pressures continue to grow. Automatic emotion recognition contributes to the early warning of psychological disorders and the exploration of behavioral mechanisms, and therefore holds great research and practical value. Over the past decade, researchers have proposed a variety of automatic emotion recognition methods based on different sensing mechanisms, yet each exhibits deficiencies in some respect. For example, methods based on electroencephalogram (EEG) signals require specialized, costly, and hard-to-operate EEG devices; methods relying on visual and speech cues carry privacy risks; and methods based on the analysis of mobile phone usage patterns still need improvement in reliability and accuracy.

Considering the above, this paper proposes a novel approach to automatic emotion recognition that uses low-cost, readily available, and easy-to-use wearable hardware. Specifically, it exploits the potential correlations between human emotions and physiological signals, namely breathing sounds, heartbeat sounds, and blood pulse. By fusing data across multiple sensing modalities, this work effectively harnesses diverse information types, reduces data redundancy, and substantially improves system performance. Furthermore, while maintaining high recognition accuracy, this paper proposes an emotion recognition model based on a multi-source domain adversarial approach, which aims to improve the generalization of emotion recognition across diverse users and to minimize the adaptation cost for unseen users. The method first leverages a small amount of unlabeled data from unseen users to achieve rapid model adaptation in an unsupervised manner, and then fine-tunes the classifier's parameters with a minimal amount of labeled data to further improve recognition accuracy.

To validate the effectiveness of the proposed method, this paper designs and implements a wearable system that integrates two microphones and two photoplethysmography (PPG) sensors to measure physiological signs. The two microphones are mounted on a pair of smartglasses and an earphone to collect sounds produced by heartbeats and breathing, respectively; the two PPG sensors are embedded in the smartglasses and a smartwatch to measure the blood pulses at the head and wrist, respectively.

Based on this wearable system, we conducted extensive experiments in diverse settings with thirty participants aged 17 to 30. We also assessed the impact of environmental factors such as noise, hardware, and wearing positions to evaluate the robustness of the system. The experimental results show that, for the four basic emotions, the proposed method achieves an average recognition accuracy of 95.0% in the subject-dependent cases and 62.5% in the cross-subject cases after multi-source domain adversarial transfer learning, a 5.3% improvement over the baseline methods. When combined with few-shot supervised fine-tuning, the recognition accuracy further increases to 81.1%, surpassing the baseline methods by 12.4%. These findings affirm the feasibility of the proposed method and offer a fresh perspective for ubiquitous emotion recognition research.
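The abstract does not spell out how the four sensing channels are combined. The sketch below shows one plausible feature-level fusion scheme over the modalities it lists (heartbeat sounds, breathing sounds, head PPG, wrist PPG); the class name FusionEncoder, the per-channel encoders, and all dimensions are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch of feature-level fusion across sensing channels.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    def __init__(self, dims, out_dim=64):
        super().__init__()
        # One small encoder per modality (e.g., heartbeat sounds,
        # breathing sounds, head PPG, wrist PPG).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 32), nn.ReLU()) for d in dims
        )
        # Project the concatenated per-channel embeddings into one
        # fused feature vector, reducing cross-modal redundancy.
        self.project = nn.Linear(32 * len(dims), out_dim)

    def forward(self, channels):
        # channels: list of per-modality feature tensors, one per sensor.
        z = torch.cat([enc(c) for enc, c in zip(self.encoders, channels)],
                      dim=-1)
        return self.project(z)
```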
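For readers unfamiliar with domain adversarial training, the following sketch shows one standard way (a DANN-style gradient reversal layer) to realize the multi-source domain adversarial objective the abstract describes: each source subject is treated as one domain, and a discriminator trained through gradient reversal pushes the shared features to become subject-invariant. The architecture, module names, and hyperparameters here are assumptions for illustration; the paper's actual model may differ.

```python
# DANN-style sketch of multi-source domain adversarial training.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DannModel(nn.Module):
    def __init__(self, in_dim, feat_dim, n_emotions, n_domains):
        super().__init__()
        # Shared feature extractor over the fused physiological features.
        self.features = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        # Emotion classifier, trained on labeled source subjects only.
        self.classifier = nn.Linear(feat_dim, n_emotions)
        # Domain discriminator: predicts which subject (domain) a sample
        # comes from; the reversal layer trains features to fool it.
        self.discriminator = nn.Linear(feat_dim, n_domains)

    def forward(self, x, lam=1.0):
        z = self.features(x)
        return self.classifier(z), self.discriminator(grad_reverse(z, lam))

def train_step(model, opt, x_src, y_src, d_src, x_tgt, d_tgt, lam=0.3):
    """One step: labeled source batch plus unlabeled target batch.

    d_src holds per-sample source-subject indices; d_tgt is filled with
    the extra domain index reserved for the unseen target user.
    """
    opt.zero_grad()
    logits_src, dom_src = model(x_src, lam)
    _, dom_tgt = model(x_tgt, lam)
    cls_loss = F.cross_entropy(logits_src, y_src)
    dom_loss = F.cross_entropy(dom_src, d_src) + F.cross_entropy(dom_tgt, d_tgt)
    (cls_loss + dom_loss).backward()
    opt.step()
    return cls_loss.item(), dom_loss.item()
```

In this sketch the domain label space covers the N source subjects plus the unseen target as one extra index; training one discriminator per source domain is another common multi-source variant, and in practice lam is often warmed up from 0 rather than held constant.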
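The abstract states that, after unsupervised adaptation, only the classifier's parameters are fine-tuned with a few labeled samples from the unseen user. A minimal sketch of that step, reusing the hypothetical DannModel above with an illustrative learning rate and epoch count, might look like this:

```python
# Few-shot supervised fine-tuning of the classifier head only.
# Learning rate and epoch count are illustrative assumptions.
def few_shot_finetune(model, x_few, y_few, epochs=20, lr=1e-3):
    # Keep the adversarially aligned feature extractor fixed.
    for p in model.features.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model.classifier(model.features(x_few))
        F.cross_entropy(logits, y_few).backward()
        opt.step()
```

Updating only the lightweight classifier head is consistent with the abstract's goal of minimizing the labeling and adaptation cost for unseen users.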