These leaderboards are used to track progress in Skin Lesion Segmentation.
A generalized focal loss function based on the Tversky index is proposed to address data imbalance in medical image segmentation; the attention U-Net model is further improved by incorporating an image pyramid to preserve contextual features.
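As a rough illustration (not the paper's exact implementation), a focal loss built on the Tversky index can be sketched as follows; the parameter names `alpha`, `beta`, and `gamma` follow the common focal Tversky loss formulation, where `alpha`/`beta` weight false negatives/positives and `gamma` focuses training on hard examples:

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary segmentation mask.

    y_true, y_pred: arrays of the same shape; y_pred may be soft
    probabilities in [0, 1]. Returns a scalar loss in [0, 1].
    """
    y_true = y_true.astype(float).ravel()
    y_pred = y_pred.astype(float).ravel()
    tp = np.sum(y_true * y_pred)          # true positives
    fn = np.sum(y_true * (1.0 - y_pred))  # false negatives
    fp = np.sum((1.0 - y_true) * y_pred)  # false positives
    # Tversky index: generalizes Dice (alpha = beta = 0.5)
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    # Focal term: raise (1 - TI) to gamma to emphasize hard examples
    return (1.0 - ti) ** gamma
```

Setting `alpha > beta` penalizes false negatives more heavily, which is the usual choice when lesions occupy a small fraction of the image.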
This extended abstract describes the participation of RECOD Titans in parts 1 and 3 of the ISIC Challenge 2017 "Skin Lesion Analysis Towards Melanoma Detection" (ISBI 2017).
This paper proposes an extension of U-Net, the Bi-directional ConvLSTM U-Net with Densely connected convolutions (BCDU-Net), for medical image segmentation, which combines the strengths of U-Net, bi-directional ConvLSTM (BConvLSTM), and dense convolutions.
This work makes extensive use of multiple attentions in a CNN architecture and proposes a comprehensive attention-based CNN (CA-Net) for more accurate and explainable medical image segmentation that is aware of the most important spatial positions, channels and scales at the same time.
The presented method is capable of dealing with segmentation challenges commonly found in dermoscopic images, such as hair, oil bubbles, changes in illumination, and reflections, without any additional steps.
A new and automatic semantic segmentation network for robust skin lesion segmentation named Dermoscopic Skin Network (DSNet) is presented and is able to provide better-segmented masks on two different test datasets which can lead to better performance in melanoma detection.
An Artificial Intelligence (AI) framework for supervised skin lesion segmentation using a deep learning approach, called MFSNet (Multi-Focus Segmentation Network), is proposed; it outperforms state-of-the-art methods, demonstrating the reliability of the framework.
Experimental results show that the proposed Diagnosis-First segmentation Framework (DiFF) can effectively calibrate segmentation uncertainty and thus significantly facilitate the corresponding disease diagnosis, outperforming previous state-of-the-art multi-rater learning methods.
This paper addresses the local feature deficiency of the Transformer model by carefully re-designing the self-attention map to produce accurate dense predictions in medical images. It also proposes a multi-scale context enhancement block within skip connections that adaptively models inter-scale dependencies, overcoming the semantic gap between the stages of the encoder and decoder modules.
U-Net v2 is introduced, a new robust and efficient U-Net variant for medical image segmentation that aims to augment the infusion of semantic information into low-level features while simultaneously refining high-level features with finer details.