Cardiac segmentation is the task of delineating cardiac structures (e.g., the left and right ventricles and the myocardium) in medical images such as cardiac MRI, CT, and echocardiography.
It is argued that Transformers can serve as strong encoders for medical image segmentation tasks, and that combining them with U-Net enhances finer details by recovering localized spatial information.
With inputs and outputs directly down-sampled and up-sampled by 4x, experiments demonstrate that the pure Transformer-based U-shaped encoder-decoder network outperforms methods based on full convolution or on combinations of transformer and convolution.
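A minimal PyTorch sketch of the hybrid design behind these two summaries: a transformer encoder over patch tokens followed by a convolutional decoder that upsamples back to full resolution. All layer sizes, the patch size, and the class count are illustrative assumptions, not the published TransUNet or Swin-Unet configurations, and the skip connections of the actual U-shaped architectures are omitted.

```python
import torch
import torch.nn as nn

class TransformerUNetSketch(nn.Module):
    """Minimal hybrid: transformer encoder over patch tokens, conv decoder
    that upsamples back to the input resolution (illustrative sizes only)."""
    def __init__(self, in_ch=1, num_classes=4, img_size=224, patch=16, dim=256):
        super().__init__()
        self.grid = img_size // patch                      # tokens per side
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # convolutional decoder: four 2x upsampling stages recover the 16x patch factor
        self.decoder = nn.Sequential(
            *[nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                            nn.Conv2d(dim if i == 0 else 64, 64, 3, padding=1),
                            nn.ReLU(inplace=True)) for i in range(4)],
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens)
        feat = tokens.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        return self.decoder(feat)                          # (B, num_classes, H, W)

# usage: logits = TransformerUNetSketch()(torch.randn(1, 1, 224, 224))
```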
This work tackles automated left and right ventricle segmentation by applying a deep fully convolutional neural network for pixel-wise labeling in cardiac magnetic resonance images.
A novel self-supervised few-shot segmentation (FSS) framework for medical images eliminates the requirement for annotations during training; superpixel-based pseudo-labels are generated to provide supervision.
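A minimal sketch of the superpixel pseudo-label idea, assuming scikit-image (>= 0.19) SLIC as the superpixel method; the actual framework's superpixel algorithm and hyperparameters may differ. Each superpixel of an unlabelled slice becomes a binary pseudo foreground mask that can supervise a self-supervised few-shot episode.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_pseudo_labels(image_2d, n_segments=100):
    """Generate binary pseudo-label masks from superpixels of an unlabelled slice.

    image_2d: 2D float array (a single MR slice). Returns a list of boolean masks,
    one per superpixel, usable as pseudo foreground classes during training
    (hyperparameters are illustrative assumptions).
    """
    segments = slic(image_2d, n_segments=n_segments, compactness=0.1,
                    channel_axis=None, start_label=1)
    return [(segments == label) for label in np.unique(segments)]

# usage sketch: pick one pseudo-label mask as the "support" class for an episode
# masks = superpixel_pseudo_labels(np.random.rand(256, 256))
```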
This work proposes a graph architecture that uses two convolutional rings based on cardiac anatomy, eliminating anatomically incorrect multi-structure segmentations on the publicly available CAMUS dataset, and shows that this predictor can detect out-of-distribution and unsuitable input images in real time.
This paper proposes PnP-AdaNet (plug-and-play adversarial domain adaptation network) for adapting segmentation networks between different modalities of medical images, e.g., MRI and CT, and introduces a novel benchmark on a cardiac dataset for the task of unsupervised cross-modality domain adaptation.
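The adversarial adaptation idea can be sketched as a generic feature-level domain discriminator; this is not the published PnP-AdaNet architecture, and the feature shapes are assumptions.

```python
import torch
import torch.nn as nn

# generic feature-space adversarial adaptation sketch (shapes are assumptions)
discriminator = nn.Sequential(
    nn.Conv2d(256, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, stride=2, padding=1),
)
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(feat_source, feat_target):
    """Discriminator learns to tell source (e.g. MRI) from target (e.g. CT) features;
    the segmenter's encoder is trained to fool it on target features."""
    d_src = discriminator(feat_source.detach())
    d_tgt = discriminator(feat_target.detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    # encoder (generator) loss: target features should look like source to D;
    # in a real loop, freeze D's parameters when stepping on this loss
    loss_g = bce(discriminator(feat_target), torch.ones_like(d_tgt))
    return loss_d, loss_g
```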
Magnetic resonance (MR) protocols rely on several sequences to assess pathology and organ status properly. Despite advances in image analysis, we tend to treat each sequence, here termed modality, in isolation. Taking advantage of the common information shared between modalities (an organ’s anatomy) is beneficial for multi-modality processing and learning. However, we must overcome inherent anatomical misregistrations and disparities in signal intensity across the modalities to obtain this benefit. We present a method that offers improved segmentation accuracy of the modality of interest (over a single input model), by learning to leverage information present in other modalities, even if few (semi-supervised) or no (unsupervised) annotations are available for this specific modality. Core to our method is learning a disentangled decomposition into anatomical and imaging factors. Shared anatomical factors from the different inputs are jointly processed and fused to extract more accurate segmentation masks. Image misregistrations are corrected with a Spatial Transformer Network, which non-linearly aligns the anatomical factors. The imaging factor captures signal intensity characteristics across different modality data and is used for image reconstruction, enabling semi-supervised learning. Temporal and slice pairing between inputs are learned dynamically. We demonstrate applications in Late Gadolinium Enhanced (LGE) and Blood Oxygenation Level Dependent (BOLD) cardiac segmentation, as well as in T2 abdominal segmentation. Code is available at https://github.com/vios-s/multimodal_segmentation.
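A highly simplified sketch of the disentangle-and-fuse idea described above; the published model (see the linked repository) differs substantially, the spatial alignment by the Spatial Transformer Network is omitted, and all shapes and layer choices here are assumptions. Each modality is encoded into an anatomy factor, the anatomy factors are fused for segmentation, and a per-modality imaging factor drives reconstruction for the semi-supervised signal.

```python
import torch
import torch.nn as nn

class DisentangleFuseSketch(nn.Module):
    """Toy two-modality model: per-modality anatomy encoders, max-fusion of the
    anatomy factors, a shared segmentation head, and a reconstruction decoder
    conditioned on an imaging factor (all sizes are illustrative assumptions)."""
    def __init__(self, ch=16, num_classes=4, z_dim=8):
        super().__init__()
        def enc():  # anatomy encoder for one modality
            return nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.anatomy_enc = nn.ModuleList([enc(), enc()])
        self.imaging_enc = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(1, z_dim))
        self.seg_head = nn.Conv2d(ch, num_classes, 1)
        self.recon = nn.Conv2d(ch + z_dim, 1, 1)

    def forward(self, x_a, x_b):
        s_a, s_b = self.anatomy_enc[0](x_a), self.anatomy_enc[1](x_b)
        s_fused = torch.max(s_a, s_b)                  # fuse shared anatomy factors
        seg = self.seg_head(s_fused)                   # segmentation of the modality of interest
        z_a = self.imaging_enc(x_a)                    # imaging factor of modality A
        z_map = z_a[:, :, None, None].expand(-1, -1, *s_a.shape[2:])
        recon_a = self.recon(torch.cat([s_a, z_map], dim=1))  # reconstruction target: x_a
        return seg, recon_a
```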
A deep-learning method performs cardiac segmentation on short-axis magnetic resonance imaging stacks iteratively from the top slice to the bottom slice using a novel variant of the U-net.
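One way to read the slice-to-slice scheme is that each prediction is passed along with the next slice as the stack is traversed top to bottom. The sketch below assumes a model taking a 2-channel input (slice plus previous mask), which is an assumption for illustration rather than the paper's exact U-net variant.

```python
import torch

def segment_stack_top_down(model, volume):
    """Segment a short-axis stack slice by slice from top to bottom, conditioning
    each slice on the previous slice's predicted mask (illustrative propagation).

    volume: tensor of shape (S, H, W); model: maps (1, 2, H, W) -> (1, C, H, W).
    """
    prev_mask = torch.zeros(1, 1, *volume.shape[1:])        # empty prior for the top slice
    predictions = []
    for s in range(volume.shape[0]):
        slice_in = volume[s][None, None]                     # (1, 1, H, W)
        logits = model(torch.cat([slice_in, prev_mask], dim=1))
        pred = logits.argmax(dim=1, keepdim=True).float()    # (1, 1, H, W) label map
        predictions.append(pred)
        prev_mask = (pred > 0).float()                       # propagate foreground prior
    return torch.cat(predictions, dim=0)                     # (S, 1, H, W)
```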
This work presents a novel learning framework, formulated as an anomaly detection problem, to monitor the performance of heart segmentation models in the absence of ground truth; it derives surrogate quality measures for a segmentation and flags suspicious results.
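A surrogate quality score in this spirit could, for instance, come from an autoencoder trained only on plausible segmentation masks, with high reconstruction error flagging suspicious outputs. The architecture and threshold below are assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

# toy mask autoencoder: assumed to be trained only on plausible segmentation masks
mask_ae = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)

def surrogate_quality(pred_mask, threshold=0.05):
    """Flag a predicted mask as suspicious if the autoencoder cannot reconstruct it
    well, i.e. it looks anomalous relative to the distribution of plausible masks."""
    with torch.no_grad():
        recon = mask_ae(pred_mask)
        error = torch.mean((recon - pred_mask) ** 2).item()
    return error, error > threshold   # (surrogate score, suspicious?)
```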
Experimental results show that joint learning of the two tasks is complementary, and the proposed models significantly outperform competing methods in both accuracy and speed.