3260 papers • 126 benchmarks • 313 datasets
(Image credit: 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study)
These leaderboards are used to track progress in Brain Segmentation.
Use these libraries to find Brain Segmentation models and implementations.
This paper introduces three variants of squeeze-and-excitation (SE) modules for image segmentation, incorporates these SE modules into three different state-of-the-art F-CNNs (DenseNet, SD-Net, U-Net), and observes consistent performance improvements across all architectures while only minimally affecting model complexity.
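As a rough illustration of the channel squeeze-and-excitation idea described above (a minimal NumPy sketch, not the paper's implementation; weight shapes and the reduction ratio are assumptions):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Channel-wise squeeze-and-excitation recalibration (illustrative).

    x:  (C, H, W) feature map
    w1: (C // r, C) squeeze weights for a bottleneck of ratio r (assumed)
    w2: (C, C // r) excitation weights
    """
    # Squeeze: global average pool over spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Recalibrate: rescale each channel by its learned gate
    return x * gate[:, None, None]
```

The spatial-SE and concurrent spatial-and-channel-SE variants follow the same gating pattern but compute the gate per spatial location instead of (or in addition to) per channel.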
QuickNAT, a fully convolutional, densely connected neural network that segments an MRI brain scan in 20 seconds, is introduced; it achieves superior segmentation accuracy and reliability compared to state-of-the-art methods while being orders of magnitude faster.
A fully automatic way to quantify tumor imaging characteristics using deep-learning-based segmentation is proposed, together with a test of whether these characteristics are predictive of tumor genomic subtypes.
This work unravels the potential of 3D deep learning to advance recognition performance on volumetric image segmentation and proposes a deep voxelwise residual network, referred to as VoxResNet, which borrows the spirit of deep residual learning from 2D image recognition tasks and extends it into a 3D variant for handling volumetric data.
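The core of a voxelwise residual unit is the identity skip connection applied to volumetric (C, D, H, W) tensors. A minimal NumPy sketch, assuming 1x1x1 (pointwise) convolutions for brevity (the paper uses larger 3D kernels, batch normalization, and deeper stacks):

```python
import numpy as np

def voxres_block(x, w1, w2):
    """Residual unit on a volume: output = ReLU(x + F(x)) (illustrative).

    x:      (C, D, H, W) volumetric feature map
    w1, w2: (C, C) pointwise 1x1x1 convolution weights (assumed shapes)
    """
    # Residual branch F(x): pointwise conv + ReLU, then pointwise conv
    h = np.maximum(np.einsum('oc,cdhw->odhw', w1, x), 0.0)
    h = np.einsum('oc,cdhw->odhw', w2, h)
    # Identity skip connection, then ReLU
    return np.maximum(x + h, 0.0)
```

The identity path lets gradients flow through many stacked units, which is what allows the 3D network to be trained at depth.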
This work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data; it yielded segmentations that are highly consistent with a standard atlas-based approach while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps.
HyperDenseNet is proposed, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems; it has total freedom to learn more complex combinations between the modalities, within and in between all levels of abstraction, which significantly increases its representation learning capacity.
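The multi-modal dense connectivity above means each layer sees the concatenation of every earlier feature map from every modality stream. A minimal NumPy sketch of one such layer, using 2D slices and pointwise weights for brevity (the actual network is 3D with learned convolutions; all shapes here are assumptions):

```python
import numpy as np

def hyperdense_layer(feats_mod1, feats_mod2, w):
    """One densely connected multi-modal layer (illustrative).

    feats_mod1, feats_mod2: lists of (Ci, H, W) feature maps, one list per
                            modality stream (e.g. T1 and T2 MRI)
    w: (C_out, C_total) pointwise weights, where C_total is the sum of all
       input channels across both streams (assumed shape)
    """
    # Dense connectivity: concatenate all earlier features from both
    # modality streams along the channel dimension
    x = np.concatenate(feats_mod1 + feats_mod2, axis=0)
    # Pointwise convolution + ReLU stands in for the layer's transform
    return np.maximum(np.einsum('oc,chw->ohw', w, x), 0.0)
```

Because the concatenated input mixes channels from both streams at every depth, the network can learn cross-modal combinations at any level of abstraction rather than only after a fixed fusion point.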
A deep learning strategy is presented that enables contrast-agnostic semantic segmentation of completely unpreprocessed brain MRI scans, without requiring additional training or fine-tuning for new modalities, and generalizes significantly better across datasets than training on real images.
The Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021) and is the first large multi-class benchmark for unsupervised cross-modality domain adaptation.
To our knowledge, this technique is the first to tackle anatomical segmentation of the whole brain using deep neural networks, and it does not require any non-linear registration of the MR images.