Automatic spectral-temporal modality-based EEG sleep staging
In clinical environments, sleep staging is an important diagnostic tool. Currently, sleep stagings are produced manually by technicians, as existing feature representations are insufficient for automated classification. Classifying and segmenting these EEG signals is a non-trivial task due to a combination of high noise levels, copious artifacts, variation across recording equipment, and significant inter- and intra-patient variability. Classical approaches have typically relied on extensive artifact elimination, a mixture of band powers from a series of fixed frequency bands, and time-domain features. To produce a more accurate, fully automated sleep staging, this work introduces a novel Dense Denoised Spectral (DDS) feature representation that exploits the time and frequency modalities to adaptively denoise single-channel EEG recordings. The joint time-frequency structure is composed of spectral and temporal bands that share similar levels of activity. Even under noisy and varied conditions, the joint modality in the data can be found through a combination of median operators, thresholding, sparse approximations, and consensus k-means. From the learned time-frequency segmentation, either low-rank representations can be constructed or the original representation can be denoised using the segments, yielding a better estimate of the features than the fixed temporal-spectral windows used in prior work. The 2D time-frequency structure of the DDS features is learned independently for each patient, which allows the DDS features to adapt to individual differences and significantly increases the total accuracy of the sleep staging.
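The pipeline outlined above can be illustrated with a minimal sketch. This is not the author's implementation: the cluster count, window sizes, threshold, and the simple segment-mean denoising step are all assumptions chosen for illustration, and the synthetic signal stands in for a real single-channel EEG recording. It shows the general idea of a time-frequency representation smoothed with a median operator, thresholded for activity, segmented with k-means, and then denoised within the learned segments.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

# Synthetic stand-in for one 30 s epoch of single-channel EEG at 100 Hz.
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Joint time-frequency representation (log-compressed spectrogram).
f, tt, Sxx = spectrogram(eeg, fs=fs, nperseg=128)
log_S = np.log1p(Sxx)

# Median operator to suppress artifact spikes, then a simple activity threshold.
smooth = median_filter(log_S, size=(3, 3))
mask = smooth > np.median(smooth)

# Cluster time-frequency bins with similar activity into segments
# (plain k-means here; the thesis uses consensus k-means).
coords = np.stack(np.meshgrid(f, tt, indexing="ij"), axis=-1).reshape(-1, 2)
feats = np.column_stack([coords / [f.max(), tt.max()],
                         smooth.reshape(-1, 1),
                         mask.reshape(-1, 1)])
n_segments = 4  # assumed; a real system would tune this per patient
labels = KMeans(n_clusters=n_segments, n_init=10,
                random_state=0).fit_predict(feats)

# Denoise: replace each bin with the mean of its learned segment.
denoised = np.empty(smooth.size)
for k in range(n_segments):
    denoised[labels == k] = smooth.reshape(-1)[labels == k].mean()
denoised = denoised.reshape(smooth.shape)
```

Because the segmentation is computed from each recording's own spectrogram, repeating this per patient is what lets the representation adapt to individual differences, in contrast to fixed temporal-spectral windows.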