3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Liver Segmentation.
Use these libraries to find Liver Segmentation models and implementations.
No subtasks available.
A heterogeneous 3D network called Med3D is designed to co-train on the multi-domain 3DSeg-8 dataset, producing a series of pre-trained models that accelerate training convergence on target 3D medical tasks and improve accuracy by 3% to 20%.
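As a rough illustration of the multi-domain co-training idea behind Med3D, the sketch below shares a single 3D encoder across several source datasets while giving each dataset its own segmentation head. PyTorch is assumed, and all names (SharedEncoder, MultiDomainSeg), layer sizes, and class counts are hypothetical; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Toy 3D feature extractor shared by every source domain."""
    def __init__(self, in_channels=1, width=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
            nn.BatchNorm3d(width), nn.ReLU(inplace=True),
            nn.Conv3d(width, width * 2, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(width * 2), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class MultiDomainSeg(nn.Module):
    """Shared encoder with one lightweight segmentation head per source dataset."""
    def __init__(self, num_classes_per_domain):
        super().__init__()
        self.encoder = SharedEncoder()
        self.heads = nn.ModuleDict({
            name: nn.Conv3d(32, n_cls, kernel_size=1)
            for name, n_cls in num_classes_per_domain.items()
        })

    def forward(self, x, domain):
        return self.heads[domain](self.encoder(x))

# Co-training step: each mini-batch comes from one source domain, so the shared
# encoder accumulates gradients from every domain over the course of training.
model = MultiDomainSeg({"liver": 2, "heart": 2, "pancreas": 3})
volume = torch.randn(1, 1, 32, 64, 64)      # one CT sub-volume: (N, C, D, H, W)
logits = model(volume, domain="liver")      # (1, 2, 16, 32, 32) at half resolution
```

After co-training, the shared encoder weights would serve as the pre-trained initialization for a downstream 3D medical task.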
This work extends an image-to-image translation method to generate a diverse set of realistic-looking synthetic images from a simple laparoscopy simulation, and shows that the resulting dataset can be used to train models for liver segmentation in laparoscopic images.
The authors' extensive experiments demonstrate that Models Genesis significantly outperforms learning from scratch in all five target 3D applications, covering both segmentation and classification; they attribute this to their unified self-supervised learning framework, built on a simple yet powerful observation.
This work trains deep models to learn semantically enriched visual representations by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model named Semantic Genesis.
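Both Genesis-style works above pre-train on unlabeled volumes by asking a network to restore an original sub-volume from a corrupted one. The sketch below shows one restoration-style pretext task under that assumption, using an illustrative corruption (zeroing random cubes) and a toy encoder-decoder; the actual methods use richer transformations and architectures, so treat this only as a schematic.

```python
import torch
import torch.nn as nn

def corrupt(volume, num_cubes=4, cube=8):
    """Zero out a few random cubes so the network must learn anatomy to fill them in."""
    corrupted = volume.clone()
    _, _, d, h, w = volume.shape
    for _ in range(num_cubes):
        z = torch.randint(0, d - cube, (1,)).item()
        y = torch.randint(0, h - cube, (1,)).item()
        x = torch.randint(0, w - cube, (1,)).item()
        corrupted[:, :, z:z + cube, y:y + cube, x:x + cube] = 0.0
    return corrupted

restorer = nn.Sequential(                     # toy encoder-decoder stand-in
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv3d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(restorer.parameters(), lr=1e-3)

original = torch.rand(2, 1, 32, 32, 32)       # unlabeled CT sub-volumes
optimizer.zero_grad()
loss = nn.functional.mse_loss(restorer(corrupt(original)), original)
loss.backward()
optimizer.step()
# After pretraining, the convolutional weights are reused to initialize a
# segmentation or classification network on the labeled target task.
```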
Evaluation on a large-scale dataset of 280 patients confirmed that the proposed method outperformed previous state-of-the-art methods and significantly reduced the performance degradation in detecting focal liver lesions (FLLs) from misaligned multiphase CT images.
Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is in high demand in clinical practice. Recently, fully convolutional neural networks (FCNs), including 2-D and 3-D FCNs, have served as the backbone in many volumetric image segmentation tasks. However, 2-D convolutions cannot fully leverage the spatial information along the third dimension, while 3-D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D DenseUNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts, in the spirit of the auto-context algorithm, for liver and tumor segmentation. We formulate the learning process of the H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features are jointly optimized through a hybrid feature fusion layer. We extensively evaluated our method on the MICCAI 2017 Liver Tumor Segmentation Challenge dataset and the 3DIRCADb dataset. Our method outperformed other state-of-the-art methods on tumor segmentation and achieved very competitive performance for liver segmentation, even with a single model.
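To make the hybrid 2-D/3-D fusion idea concrete, the following PyTorch sketch runs a per-slice 2-D branch and a volumetric 3-D branch, re-stacks the slice features into a volume, and fuses the two feature maps with a small convolutional layer. The branch architectures, channel counts, and class names here are illustrative stand-ins, not the published H-DenseUNet configuration.

```python
import torch
import torch.nn as nn

class HybridFusionSeg(nn.Module):
    def __init__(self, feat2d=16, feat3d=16, num_classes=3):
        super().__init__()
        self.slice_branch = nn.Sequential(       # stand-in for the 2-D DenseUNet
            nn.Conv2d(1, feat2d, 3, padding=1), nn.ReLU(inplace=True))
        self.volume_branch = nn.Sequential(      # stand-in for the 3-D counterpart
            nn.Conv3d(1, feat3d, 3, padding=1), nn.ReLU(inplace=True))
        self.fusion = nn.Sequential(             # hybrid feature fusion layer
            nn.Conv3d(feat2d + feat3d, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, num_classes, 1))

    def forward(self, volume):                   # volume: (N, 1, D, H, W)
        n, c, d, h, w = volume.shape
        # Run every axial slice through the 2-D branch.
        slices = volume.permute(0, 2, 1, 3, 4).reshape(n * d, c, h, w)
        f2d = self.slice_branch(slices)                              # (N*D, F2, H, W)
        f2d = f2d.reshape(n, d, -1, h, w).permute(0, 2, 1, 3, 4)     # (N, F2, D, H, W)
        # Extract volumetric context with the 3-D branch, then fuse.
        f3d = self.volume_branch(volume)                             # (N, F3, D, H, W)
        return self.fusion(torch.cat([f2d, f3d], dim=1))

logits = HybridFusionSeg()(torch.randn(1, 1, 16, 64, 64))  # (1, 3, 16, 64, 64)
```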
This large-scale study of model performance in the presence of varying types and degrees of error in training data shows that, for each architecture, performance steadily declines with boundary-localized errors; however, U-Net was significantly more robust to jagged boundary errors than the other architectures.
This work proposes a method, developed as part of the LiTS (Liver Tumor Segmentation) challenge for ISBI 2017 and MICCAI 2017 comparing methods for automatic segmentation of liver lesions in CT scans, and achieves very good shape extraction with high detection sensitivity and competitive scores at the time of publication.
A fully automated liver attenuation estimation method, termed ALARM, is proposed by combining a deep convolutional neural network (DCNN) with morphological operations; it achieved "excellent" agreement with manual estimation for fatty liver detection.
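The ALARM summary suggests a general "segment, clean up, then measure" recipe. The sketch below, with assumed NumPy/SciPy tooling, a hypothetical function name, and an arbitrary erosion radius, keeps the largest connected component of a DCNN-predicted liver mask, erodes it to avoid boundary voxels, and averages the Hounsfield units inside; it is an illustration of the idea, not the published pipeline.

```python
import numpy as np
from scipy import ndimage

def estimate_liver_attenuation(ct_hu, liver_mask, erosion_iters=3):
    """ct_hu: 3-D array of Hounsfield units; liver_mask: binary mask from a DCNN."""
    # Keep only the largest connected component to drop spurious predictions.
    labels, n = ndimage.label(liver_mask)
    if n == 0:
        return float("nan")
    sizes = ndimage.sum(liver_mask, labels, index=np.arange(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    # Erode the mask so the measurement avoids boundary / partial-volume voxels.
    eroded = ndimage.binary_erosion(largest, iterations=erosion_iters)
    region = eroded if eroded.any() else largest
    return float(ct_hu[region].mean())

# Example with synthetic data; a low mean attenuation inside the liver is the
# cue used for fatty liver detection.
ct = np.random.normal(loc=30.0, scale=10.0, size=(64, 128, 128))
mask = np.zeros_like(ct, dtype=bool)
mask[20:44, 40:90, 40:90] = True
print(estimate_liver_attenuation(ct, mask))
```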