3260 papers • 126 benchmarks • 313 datasets
Medical object detection is the task of identifying medical-based objects within an image. (Image credit: Liver Lesion Detection from Weakly-labeled Multi-phase CT Volumes with a Grouped Single Shot MultiBox Detector)
These leaderboards are used to track progress in medical object detection.
Use these libraries to find medical object detection models and implementations.
This work presents a framework to automatically detect and localize tumors as small as 100 x 100 pixels in gigapixel microscopy images sized 100,000 x 100,000 pixels and achieves image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides.
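Detecting small tumors in gigapixel slides typically starts by tiling the whole-slide image into patches that a classifier can process. The sketch below illustrates that tiling step only; the function name and the use of an in-memory list-of-lists as a stand-in for a lazily read whole-slide image are assumptions, not the paper's implementation.

```python
def tile_image(image, patch=100, stride=100):
    """Split a 2D image into patches of size `patch` x `patch`.

    A real pipeline would read regions of the 100,000 x 100,000-pixel
    slide lazily; here a small nested list stands in for the image.
    """
    h, w = len(image), len(image[0])
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append([row[x:x + patch] for row in image[y:y + patch]])
            coords.append((y, x))
    return patches, coords

# Toy 400 x 400 "slide" yields a 4 x 4 grid of 100 x 100 patches.
slide = [[0] * 400 for _ in range(400)]
tiles, coords = tile_image(slide)
print(len(tiles))  # 16
```

Patch-level predictions at the returned coordinates can then be aggregated into an image-level score, which is what the reported AUC is computed over.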
Retina U-Net is proposed: a simple architecture that naturally fuses the RetinaNet one-stage detector with the U-Net architecture widely used for semantic segmentation of medical images, and yields strong detection performance otherwise reached only by its more complex two-stage counterparts.
A multitask universal lesion analysis network (MULAN) is proposed for joint detection, tagging, and segmentation of lesions in a variety of body parts, greatly extending existing work on single-task lesion analysis of specific body parts.
A 3D context-enhanced region-based CNN (3DCE) is proposed to incorporate 3D context information efficiently by aggregating feature maps of 2D images to detect lesions in computed tomography scans.
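The core idea of using 3D context with a 2D detector is to group each CT slice with its neighbours so the network sees adjacent anatomy as extra input channels. The sketch below shows only this grouping step in plain Python; 3DCE itself aggregates the *feature maps* of neighbouring slices after a shared 2D backbone, and the function name and replication padding at volume edges are assumptions for illustration.

```python
def group_slices(volume, num_context=1):
    """Group each slice with `num_context` neighbours on each side.

    Edge slices are padded by replicating the boundary slice, an
    assumed convention for this sketch.
    """
    n = len(volume)
    groups = []
    for i in range(n):
        group = [volume[max(0, min(n - 1, i + d))]
                 for d in range(-num_context, num_context + 1)]
        groups.append(group)
    return groups

# Strings stand in for 2D slice arrays to keep the sketch readable.
volume = [f"slice_{i}" for i in range(5)]
print(group_slices(volume, 1)[0])  # ['slice_0', 'slice_0', 'slice_1']
```

Each group can then be fed through a shared 2D backbone, with the per-slice feature maps concatenated before the detection head, which is the aggregation 3DCE describes.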
Evaluation on a large-scale dataset with 280 patients confirmed that the proposed method outperformed previous state-of-the-art methods and significantly reduced the performance degradation for detecting FLLs using misaligned multiphase CT images.
This diagnostic study describes a novel attention-based deep neural network framework for classifying microscopy images to identify Barrett esophagus and esophageal adenocarcinoma.
With the proposed model design, the data-hunger problem is alleviated, since the learning task becomes easier once the correct clinical-practice prior is induced; promising results are shown on the NIH DeepLesion dataset.
Results show that the one-stage object detection model is a practical solution: it runs in near real time and can learn an unbiased feature representation from a large-volume real-world detection dataset, avoiding the tedious and time-consuming construction of weak phase-level bounding-box labels.
A comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking deep learning methods for automated real-time polyp identification and delineation, which could transform current clinical practice and minimise miss-detection rates.
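Benchmarking detectors like these typically matches predicted boxes to ground truth by intersection-over-union (IoU). A minimal IoU function for axis-aligned boxes, written here as a generic sketch rather than any specific benchmark's code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10 x 10 boxes overlapping in a 5 x 5 region: 25 / 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

A prediction is usually counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (0.5 is a common choice), which is how miss-detection rates are tallied.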
This work proposes a highly accurate and efficient one-stage lesion detector by redesigning RetinaNet to meet the particular challenges of medical imaging and optimizing the anchor configurations with a differential evolution search algorithm.
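Differential evolution maintains a population of candidate solutions and improves each by mutating it with the scaled difference of two other candidates. The sketch below is a minimal rand/1/bin variant applied to a toy anchor-scale objective; the objective, the lesion sizes, and all parameter values are illustrative assumptions, not the paper's setup.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, iters=100,
                           f=0.8, cr=0.9, seed=0):
    """Minimal rand/1/bin differential evolution minimiser (a sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct other members for the mutation step.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < cr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(hi, max(lo, v)))  # clip to bounds
            s = objective(trial)
            if s < scores[i]:  # greedy selection: keep the better candidate
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Hypothetical objective: three anchor scales should sit close to
# three assumed lesion sizes (in pixels).
lesion_sizes = [8.0, 24.0, 64.0]

def anchor_mismatch(scales):
    return sum(min(abs(s - t) for s in scales) for t in lesion_sizes)

best, score = differential_evolution(anchor_mismatch, bounds=[(4, 128)] * 3)
```

In the paper the objective is detection performance on a validation set rather than this toy distance, but the search loop follows the same pattern.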