A Sleep Staging Model Based on Self-Attention Mechanism and Bi-Directional LSTM
A sleep staging model based on a self-attention mechanism and a bidirectional long short-term memory (Bi-LSTM) network is proposed to address the problem that existing models cannot fully capture the transient and random waveforms in samples, nor focus on the typical and important waveforms, which degrades staging results. First, a single-stream time-frequency information learning module is constructed to automatically extract low-level representations of the PSG signals and to mine the time-invariant information and frequency features of the EEG data. Then, an adaptive feature recalibration learning module is designed to calibrate the instantaneous and key waveform features that appear within each 30-second sample, assigning these features greater weights so that they receive more attention. Finally, the features are fed into a sequence dependency learning module that learns the contextual relationships between associated samples, fully utilizing the preceding and following adjacent samples to determine the category of the current sample. The results show that this method outperforms other mainstream models, achieving accuracies of 85.5% and 84.3% on the Sleep-edf-2013 and Sleep-edf-2018 public sleep datasets, with MF1 values of 82.1% and 79.6%, respectively, which can provide a technical reference for sleep staging tasks.
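The pipeline described above maps naturally onto three stacked modules: per-epoch time-frequency encoding, attention-based feature recalibration, and Bi-LSTM sequence modelling across neighbouring epochs. Below is a minimal PyTorch sketch of that structure; the module names, kernel sizes, attention and LSTM hyperparameters, and the single-channel 100 Hz input are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the three-module pipeline described in the abstract.
# All layer sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn


class TimeFreqEncoder(nn.Module):
    """Single-stream time-frequency learning: 1-D convolutions over a raw
    30-second epoch extract low-level, time-invariant representations."""
    def __init__(self, in_ch=1, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=50, stride=6, padding=25),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(64, feat_dim, kernel_size=8, stride=1, padding=4),
            nn.BatchNorm1d(feat_dim), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),        # fixed-length feature map
        )

    def forward(self, x):                    # x: (B, 1, T) raw epoch
        return self.conv(x)                  # (B, feat_dim, 16)


class FeatureRecalibration(nn.Module):
    """Adaptive recalibration: self-attention over the feature map assigns
    larger weights to transient / key waveform positions in the epoch."""
    def __init__(self, feat_dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, f):                    # f: (B, feat_dim, L)
        f = f.transpose(1, 2)                # (B, L, feat_dim)
        a, _ = self.attn(f, f, f)
        f = self.norm(f + a)                 # residual + layer norm
        return f.mean(dim=1)                 # (B, feat_dim) epoch embedding


class SequenceDependencyLearner(nn.Module):
    """Bi-LSTM over a window of consecutive epochs so that adjacent samples
    before and after inform the staging of the current sample."""
    def __init__(self, feat_dim=128, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, seq):                  # seq: (B, S, feat_dim)
        out, _ = self.lstm(seq)
        return self.head(out)                # (B, S, n_classes) per-epoch logits


class SleepStager(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = TimeFreqEncoder()
        self.recal = FeatureRecalibration()
        self.seq = SequenceDependencyLearner()

    def forward(self, x):                    # x: (B, S, 1, T) window of epochs
        b, s, c, t = x.shape
        feats = self.recal(self.encoder(x.reshape(b * s, c, t)))
        return self.seq(feats.reshape(b, s, -1))


if __name__ == "__main__":
    # 2 windows of 5 consecutive 30-second epochs sampled at 100 Hz.
    logits = SleepStager()(torch.randn(2, 5, 1, 3000))
    print(logits.shape)                      # torch.Size([2, 5, 5])
```

The sketch keeps the three stages separable so each module can be ablated independently, which is consistent with the abstract's framing of the model as a sequence of distinct learning modules.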