3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Automatic Sleep Stage Classification
Use these libraries to find Automatic Sleep Stage Classification models and implementations
It is highlighted that state-of-the-art automated sleep staging outperforms human scorers for both healthy volunteers and patients suffering from obstructive sleep apnea.
This paper proposes a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging, and introduces a simple yet efficient CNN architecture to power the framework.
Identifying sleep stages from bio-signals requires time-consuming and tedious labour by skilled clinicians. Deep learning approaches have been introduced to address the automatic sleep stage classification problem. However, replacing clinicians with an automatic system is difficult because individual bio-signals differ in many respects, making the model's performance inconsistent across incoming individuals. We therefore explore the feasibility of a novel approach capable of assisting clinicians and lessening their workload. We propose a transfer learning framework, entitled MetaSleepLearner, based on Model-Agnostic Meta-Learning (MAML), to transfer acquired sleep staging knowledge from a large dataset to new individual subjects (source code is available at https://github.com/IoBT-VISTEC/MetaSleepLearner). The framework requires clinicians to label only a few sleep epochs, leaving the remainder to be handled by the system. Layer-wise Relevance Propagation (LRP) was also applied to understand the learning course of our approach. Across all acquired datasets, MetaSleepLearner achieved a 5.4% to 17.7% improvement over the conventional approach, with a statistically significant difference between the means of the two approaches. Model interpretation after adaptation to each subject also confirmed that the performance gains reflected reasonable learning. MetaSleepLearner outperformed the conventional approaches after fine-tuning on recordings of both healthy subjects and patients. This is the first work to investigate a non-conventional pre-training method, MAML, for this task, opening the possibility of human-machine collaboration in sleep stage classification and easing clinicians' burden by requiring them to label only several epochs rather than an entire recording.
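The core idea behind MAML-based adaptation, as described above, is that a meta-trained model needs only a few labelled epochs from a new subject to specialise. A minimal sketch of the inner-loop adaptation step, using a toy linear classifier as a hypothetical stand-in for the actual sleep-staging network (names and hyperparameters here are illustrative, not from the paper):

```python
import numpy as np

def maml_adapt(theta, support_x, support_y, lr_inner=0.1, steps=5):
    """MAML-style inner-loop adaptation: a few gradient steps on a
    small labelled support set (the handful of epochs scored by a
    clinician). `theta` are the meta-learned initial weights."""
    w = theta.copy()
    for _ in range(steps):
        logits = support_x @ w
        probs = 1.0 / (1.0 + np.exp(-logits))            # sigmoid
        grad = support_x.T @ (probs - support_y) / len(support_y)
        w -= lr_inner * grad                             # inner-loop update
    return w

# A few clinician-labelled epochs (toy 2-d features, binary stages):
support_x = np.array([[1.0, 0.0], [2.0, 0.0], [-1.0, 0.0], [-2.0, 0.0]])
support_y = np.array([1.0, 1.0, 0.0, 0.0])
adapted = maml_adapt(np.zeros(2), support_x, support_y)
```

In full MAML, an outer loop would then update `theta` so that this few-step adaptation works well across many subjects; only the inner step is sketched here.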
A deep transfer learning approach that overcomes data-variability and data-inefficiency issues and enables transferring knowledge from a large dataset to a small cohort would improve the quality of automatic sleep staging models when the amount of data is relatively small.
A novel deep graph neural network, named GraphSleepNet, is proposed, to adaptively learn the intrinsic connection among different electroencephalogram (EEG) channels, represented by an adjacency matrix, thereby best serving the spatial-temporal graph convolution network (ST-GCN) for sleep stage classification.
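The summary above describes learning an adjacency matrix among EEG channels and feeding it to a spatial-temporal graph convolution. A rough sketch of the two pieces, under simplifying assumptions (dot-product similarity plus row-softmax for the adaptive adjacency; a single aggregate-then-project graph convolution), not the paper's actual GraphSleepNet architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_adjacency(node_emb):
    """Build a dense, learned adjacency among EEG channels from
    per-channel embeddings: pairwise similarity, normalised so each
    row sums to 1 (a simplified stand-in for the learned graph)."""
    scores = node_emb @ node_emb.T        # channel-to-channel similarity
    return softmax(scores, axis=1)

def graph_conv(A, X, W):
    """One spatial graph convolution: aggregate neighbour features
    through the adjacency A, then project with weight matrix W."""
    return np.tanh(A @ X @ W)

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 4))             # 6 EEG channels, 4-d embeddings
A = adaptive_adjacency(emb)               # learned channel graph
H = graph_conv(A, rng.normal(size=(6, 8)), rng.normal(size=(8, 3)))
```

The temporal half of an ST-GCN (convolving over successive sleep epochs) is omitted; the point is only how a learned adjacency feeds the spatial convolution.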
Automatic sleep stage scoring systems based on deep learning algorithms should consider as much data as possible from as many sources available to ensure proper generalization.
RobustSleepNet is introduced, a deep learning model for automatic sleep stage classification that can handle arbitrary PSG montages, unlocking the possibility of high-quality out-of-the-box automatic sleep staging with any clinical setup.
A novel attention-based deep learning architecture called AttnSleep is proposed to classify sleep stages using single-channel EEG signals, outperforming state-of-the-art techniques in terms of different evaluation metrics.
A novel adversarial learning framework called ADAST is proposed to tackle the domain shift problem in the unlabeled target domain, together with an iterative self-training strategy that improves classification performance on the target domain via target-domain pseudo labels.
An unsupervised time-series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) is proposed to learn time-series representations from unlabeled data, performing efficiently in few-labeled-data and transfer learning scenarios.
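Contrastive frameworks like the one summarised above learn by pulling two augmented views of the same time-series window together and pushing other windows apart. A simplified NT-Xent-style loss over two views of a batch, as a generic illustration of this family of objectives rather than TS-TCC's exact temporal/contextual contrasting modules:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss over two views (z1, z2) of the same batch of
    time-series windows: for each embedding, its positive is the other
    view of the same window; all remaining embeddings are negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalise
    sim = z @ z.T / tau                                # cosine sims / temp
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return (log_denom - sim[np.arange(2 * n), pos]).mean()

# Aligned views (each window matched with itself) give a lower loss
# than mismatched views:
views = np.eye(4)
loss_aligned = nt_xent(views, views.copy())
loss_shuffled = nt_xent(views, np.roll(views, 1, axis=0))
```

Minimising this objective yields representations that transfer well with few labels, which is the efficiency claim in the summary above.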