Human sleep staging: classifying single- or multi-channel polysomnography signals into wake/REM/NREM (W-R-N) or wake/REM/light/deep (W-R-L-D) sleep stages.
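The two label schemes above differ only in how finely NREM sleep is split. A minimal sketch of the relabelling, assuming the usual five AASM stage names (W, N1, N2, N3, REM) as input; the mapping dictionaries here are illustrative, not taken from any specific paper:

```python
# Hypothetical mapping from 5-class AASM labels to the coarser
# W-R-N and W-R-L-D schemes described above.
AASM_TO_WRN = {
    "W": "W", "REM": "R",
    "N1": "N", "N2": "N", "N3": "N",   # all NREM collapsed into one class
}
AASM_TO_WRLD = {
    "W": "W", "REM": "R",
    "N1": "L", "N2": "L",              # light sleep
    "N3": "D",                         # deep sleep
}

def coarsen(hypnogram, mapping):
    """Relabel a per-epoch hypnogram with a coarser staging scheme."""
    return [mapping[stage] for stage in hypnogram]
```

For example, `coarsen(["W", "N1", "N2", "N3", "REM"], AASM_TO_WRN)` yields `["W", "N", "N", "N", "R"]`.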
No benchmark leaderboards are currently available to track progress on sleep stage detection.
Selected papers on sleep-stage-detection models and implementations:
U-Time is a temporal fully convolutional network for sleep data analysis, based on the U-Net architecture originally proposed for image segmentation. It matches or outperforms current state-of-the-art deep learning models while being much more robust to train, and it requires no architecture or hyperparameter adaptation across tasks.
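The core U-Net idea applied to time series can be sketched in a few lines: an encoder convolution, temporal downsampling, upsampling back, a skip connection, and a per-time-step output. This is a toy NumPy sketch with random placeholder weights, not U-Time's actual implementation; all function names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'same' 1-D convolution applied independently per channel.
    x: (channels, time), w: (channels, kernel)."""
    return np.stack([np.convolve(xc, wc, mode="same") for xc, wc in zip(x, w)])

def down(x, factor=2):
    """Temporal mean-pooling: halves the time axis."""
    c, t = x.shape
    return x[:, : t - t % factor].reshape(c, -1, factor).mean(axis=2)

def up(x, factor=2):
    """Nearest-neighbour upsampling: doubles the time axis."""
    return np.repeat(x, factor, axis=1)

def u_time_like(x, kernel=5):
    """Toy U-Net-style pass: encode, downsample, upsample, fuse the
    skip connection, and emit one score per input time step."""
    c, t = x.shape
    w_enc = rng.standard_normal((c, kernel)) * 0.1
    w_dec = rng.standard_normal((c, kernel)) * 0.1
    e = np.maximum(conv1d(x, w_enc), 0)        # encoder conv + ReLU
    bottleneck = down(e)                       # coarser temporal resolution
    d = up(bottleneck)[:, :t]                  # back to input resolution
    fused = d + e                              # skip connection (additive)
    return conv1d(fused, w_dec).sum(axis=0)    # per-time-step score

scores = u_time_like(rng.standard_normal((2, 64)))
```

Because the network is fully convolutional, it produces an output for every time step regardless of input length, which is what lets one architecture serve different staging tasks without modification.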
This paper proposes a hierarchical recurrent neural network named SeqSleepNet that outperforms the state-of-the-art approaches, achieving an overall accuracy, macro F1-score, and Cohen’s kappa of 87.1%, 83.3%, and 0.815 on a publicly available dataset with 200 subjects.
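The hierarchical idea, in general terms: a lower-level recurrent network summarises each 30-second epoch, and a higher-level recurrent network runs over the sequence of epoch summaries so that each epoch's label can depend on its neighbours. A toy NumPy sketch with random placeholder weights, assuming nothing about SeqSleepNet's actual layer sizes or cell type:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_last(x, Wx, Wh):
    """Minimal tanh RNN; returns the final hidden state. x: (steps, in_dim)."""
    h = np.zeros(Wh.shape[0])
    for xt in x:
        h = np.tanh(Wx @ xt + Wh @ h)
    return h

def hierarchical_scores(epochs, dims=(8, 8), n_classes=5):
    """Two-level sketch: an epoch-level RNN summarises each epoch, a
    sequence-level RNN runs over those summaries, and a linear layer
    emits one class-score vector per epoch (weights are random)."""
    d1, d2 = dims
    Wx1 = rng.standard_normal((d1, epochs.shape[2])) * 0.1
    Wh1 = rng.standard_normal((d1, d1)) * 0.1
    Wx2 = rng.standard_normal((d2, d1)) * 0.1
    Wh2 = rng.standard_normal((d2, d2)) * 0.1
    Wout = rng.standard_normal((n_classes, d2)) * 0.1
    summaries = np.stack([rnn_last(e, Wx1, Wh1) for e in epochs])
    scores, h = [], np.zeros(d2)
    for s in summaries:                        # sequence-level recurrence
        h = np.tanh(Wx2 @ s + Wh2 @ h)
        scores.append(Wout @ h)
    return np.stack(scores)

# 20 epochs, each reduced to 10 time steps of 4 features
out = hierarchical_scores(rng.standard_normal((20, 10, 4)))
```

The payoff of the sequence level is context: transitions like N2→REM are far more plausible than W→REM mid-night, and the upper RNN can learn such regularities.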
This work demonstrates an end-to-end on-smartphone pipeline that infers sleep stages from single 30-second epochs, with an overall accuracy of 83.5% under 20-fold cross-validation for five-class sleep staging on the open Sleep-EDF dataset.
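The "single 30-second epoch" framing starts with a standard preprocessing step: cutting the recording into fixed-length, non-overlapping windows. A minimal sketch, assuming a 1-D signal and a known sampling rate (the 100 Hz figure below is illustrative, not claimed by the paper):

```python
import numpy as np

def to_epochs(signal, fs, epoch_sec=30):
    """Split a 1-D signal into non-overlapping 30-second epochs.
    Trailing samples that do not fill a whole epoch are dropped."""
    samples = int(fs * epoch_sec)
    n = len(signal) // samples
    return signal[: n * samples].reshape(n, samples)

# e.g. 5 minutes of signal at 100 Hz -> ten 30-second epochs of 3000 samples
epochs = to_epochs(np.zeros(5 * 60 * 100), fs=100)
```

Scoring each epoch independently is what makes on-device, near-real-time inference possible: no context window of neighbouring epochs has to be buffered.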
This work highlights that state-of-the-art automated sleep staging outperforms human scorers for both healthy volunteers and patients suffering from obstructive sleep apnea.
Linear classifiers trained on features learned with self-supervision (SSL) consistently outperformed purely supervised deep neural networks in low-labeled-data regimes and reached competitive performance when all labels were available, suggesting that self-supervision may pave the way to wider use of deep learning models on EEG data.
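The evaluation protocol behind that claim is the "linear probe": freeze the SSL-learned features and fit only a linear classifier on top. A toy sketch with logistic regression trained by gradient descent; the two Gaussian clusters stand in for frozen features of two sleep stages and are an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_probe(feats, labels, lr=0.5, steps=200):
    """Fit a logistic-regression 'linear probe' on frozen features by
    plain gradient descent (binary labels in {0, 1})."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
        grad = p - labels                            # dLoss/dlogit
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy "frozen SSL features": two Gaussian clusters standing in for two stages.
X = np.vstack([rng.normal(-1, 0.3, (50, 8)), rng.normal(+1, 0.3, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_linear_probe(X, y)
acc = (((X @ w + b) > 0).astype(int) == y).mean()
```

If a linear model on fixed features matches an end-to-end network, the heavy lifting was done by the representation, which is exactly the point of the SSL comparison.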
This paper proposes a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging, and introduces a simple yet efficient CNN architecture to power the framework.
A 34-layer deep residual ConvNet architecture for end-to-end sleep staging is proposed, which takes a raw single-channel electroencephalogram signal as input and yields a hypnogram annotation for each 30-second segment as output.
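The building block that makes a 34-layer stack trainable is the residual connection: each block learns only an additive correction to its input, so gradients flow through the identity path. A toy NumPy sketch of one such block with random placeholder weights (not the paper's actual layer configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w):
    """'same' 1-D convolution per channel. x: (C, T), w: (C, K)."""
    return np.stack([np.convolve(xc, wc, mode="same") for xc, wc in zip(x, w)])

def residual_block(x, w1, w2):
    """One residual block: two convolutions with a ReLU in between,
    plus the identity shortcut that stabilises very deep stacks."""
    h = np.maximum(conv1d_same(x, w1), 0)
    return x + conv1d_same(h, w2)          # output = input + learned residual

x = rng.standard_normal((4, 3000))         # e.g. one 30 s epoch at 100 Hz
w1 = rng.standard_normal((4, 7)) * 0.1
w2 = rng.standard_normal((4, 7)) * 0.1
y = residual_block(x, w1, w2)
```

Stacking such blocks (with occasional striding to shrink the time axis) and ending in a classifier per 30-second segment gives the overall end-to-end shape the summary describes.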
A deep transfer learning approach is proposed to overcome data-variability and data-inefficiency issues by transferring knowledge from a large dataset to a small cohort, improving the quality of automatic sleep staging models when the amount of target data is relatively small.
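The transfer recipe in its simplest form: train on the large source dataset, then use those weights to warm-start training on the small target cohort. A toy sketch using logistic regression as a stand-in for the deep model; the shifted Gaussian clusters simulating a source dataset and a slightly different target cohort are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(X, y, w=None, lr=0.5, steps=300):
    """Logistic regression by gradient descent; `w` lets us warm-start
    from weights learned on another dataset (transfer learning)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# "Large source dataset" and "small target cohort" from shifted clusters.
Xs = np.vstack([rng.normal(-1, 0.4, (200, 6)), rng.normal(1, 0.4, (200, 6))])
ys = np.array([0] * 200 + [1] * 200)
Xt = np.vstack([rng.normal(-0.8, 0.4, (10, 6)), rng.normal(0.8, 0.4, (10, 6))])
yt = np.array([0] * 10 + [1] * 10)

w_src = train_linear(Xs, ys)                            # pretrain on source
w_ft = train_linear(Xt, yt, w=w_src.copy(), steps=20)   # brief fine-tune
acc_ft = (((Xt @ w_ft) > 0).astype(int) == yt).mean()
```

With only 20 target samples, training from scratch is fragile; starting from source-trained weights, a few fine-tuning steps suffice, which is the data-efficiency argument the paper makes.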
This study validates a tractable, fully automated, and sensitive pipeline for RBD (REM sleep behavior disorder) identification that could be translated to wearable take-home technology, and demonstrates that incorporating sleep architecture and sleep stage transitions benefits RBD detection.
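One common way to turn "sleep stage transitions" into features is a row-normalised transition matrix computed from the hypnogram. A minimal sketch, assuming the W-R-L-D label set used on this page (the feature choice is illustrative, not the paper's exact pipeline):

```python
from collections import Counter

STAGES = ["W", "R", "L", "D"]

def transition_matrix(hypnogram):
    """Count stage-to-stage transitions between consecutive epochs and
    normalise each row into transition probabilities."""
    counts = Counter(zip(hypnogram, hypnogram[1:]))
    matrix = {}
    for a in STAGES:
        total = sum(counts[(a, b)] for b in STAGES)
        matrix[a] = {b: counts[(a, b)] / total if total else 0.0
                     for b in STAGES}
    return matrix

probs = transition_matrix(["W", "W", "L", "L", "D", "L", "R", "W"])
```

Flattening this matrix gives a fixed-length feature vector per night; abnormal transition patterns (e.g. unusual entries into or out of REM) are the kind of signal an RBD classifier can exploit.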