Depression Detection Based on Contextual Knowledge Enhanced Transformer Network
Depression, as a prevalent mental health problem, substantially impacts individuals' daily lives and well-being. To address the limitations of current depression detection, such as subjectivity and manual intervention, automatic detection methods based on deep learning have become a popular research direction. The primary challenge in the most accessible modality, text, is modelling the long-range and sequential dependencies in depressive texts. To address this problem, this paper proposes a contextual knowledge enhanced Transformer network model, named Robustly optimized Bidirectional Encoder Representations from Transformers approach-Bidirectional Long Short-Term Memory (RoBERTa-BiLSTM), to comprehensively extract and utilize contextual features from depressive text sequences. By combining the strengths of sequence models and Transformer architectures, the proposed model captures contextual interactions between words to provide a reference for depression category prediction and information characterization. First, the RoBERTa model is employed to embed the vocabulary into a semantic vector space; then, a BiLSTM network effectively captures long-range contextual semantics. Finally, empirical research is conducted on two large-scale datasets, DAIC-WOZ and EATD-Corpus. Experimental results demonstrate that the model achieves an accuracy exceeding 0.74 and 0.93, and a recall exceeding 0.66 and 0.56, respectively, enabling accurate depression detection.
Keywords: depression detection; sequence model; deep learning; Transformer model; Bi-directional Long Short-Term Memory (BiLSTM) model
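The two-stage design described in the abstract — RoBERTa token embeddings fed into a BiLSTM whose final states drive the depression classifier — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes PyTorch, takes precomputed RoBERTa hidden states (768-dimensional per token) as input rather than running the encoder, and the class name, hidden size, and two-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RoBERTaBiLSTMHead(nn.Module):
    """Hypothetical classification head: a BiLSTM over RoBERTa token embeddings.

    The RoBERTa encoder is assumed to run upstream; its per-token hidden
    states of shape (batch, seq_len, 768) are the input here.
    """

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 128,
                 num_classes: int = 2):
        super().__init__()
        # Bidirectional LSTM reads the token sequence left-to-right
        # and right-to-left to capture long-range context in both directions.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Final forward and backward hidden states are concatenated,
        # so the classifier sees 2 * hidden_dim features.
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # h_n has shape (2, batch, hidden_dim): one slice per direction.
        _, (h_n, _) = self.bilstm(token_embeddings)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.classifier(h)

# Stand-in for RoBERTa output: 4 sequences of 32 tokens, 768 dims each.
x = torch.randn(4, 32, 768)
logits = RoBERTaBiLSTMHead()(x)  # shape: (4, 2), one score per class
```

In practice the encoder output would come from a pretrained RoBERTa checkpoint (e.g. via the `transformers` library), and the logits would be trained with a standard cross-entropy loss against the depressed/non-depressed labels.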