3260 papers • 126 benchmarks • 313 datasets
Grading the severity of diabetic retinopathy from (ophthalmic) fundus images
These leaderboards are used to track progress in Diabetic Retinopathy Grading
Use these libraries to find Diabetic Retinopathy Grading models and implementations
This work proposes a self-supervised framework, lesion-based contrastive learning, for automated diabetic retinopathy (DR) grading, in which lesion patches are used to encourage the feature extractor to learn representations that are highly discriminative for DR grading.
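The paper's exact formulation is not reproduced here, but lesion-based contrastive frameworks typically build on an InfoNCE-style objective: two augmented views of the same lesion patch form a positive pair, while patches from other images serve as negatives. A minimal NumPy sketch of that generic objective (all names and sizes are illustrative, not the paper's implementation):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor embedding: pull the positive view
    close in cosine similarity, push the negative embeddings away."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # softmax cross-entropy with the positive pair at index 0
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
positive = anchor + 0.05 * rng.normal(size=16)       # augmented view of the same lesion patch
negatives = [rng.normal(size=16) for _ in range(8)]  # patches from other images
loss = info_nce_loss(anchor, positive, negatives)
```

Minimizing this loss over many patches is what pushes the feature extractor toward lesion-discriminative representations.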
This work demonstrates that the DR grading framework is sensitive to input resolution, objective function, and composition of data augmentation, and achieves a state-of-the-art result on the EyePACS test set with only image-level labels.
This work leveraged the Optimal Transport (OT) theory to propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts, and validated the integrated framework, OTRE, on three publicly available retinal image datasets.
It is found that an optimized MobileNet, through selective modifications, can surpass ViT-based models on various DR benchmarks, including diabetic retinopathy grading, detection of multiple fundus diseases, and classification of diabetic macular edema.
A new deep learning architecture, called BiRA-Net, is proposed, which combines an attention model for feature extraction with a bilinear model for fine-grained classification, together with a new loss function, called grading loss, which improves the training convergence of the proposed approach.
This work demonstrates the challenges of reproducing deep learning results and highlights the need for more replication and reproduction studies to validate deep learning methods, especially for medical image analysis.
This article presents a robust framework, which collaboratively utilizes patch-level and image-level annotations, for DR severity grading, and proves that the algorithm is robust when facing image quality and distribution variations that commonly exist in real-world practice.
This paper proposes a straightforward approach to penalizing severe misclassifications when predicting Diabetic Retinopathy (DR) severity from eye fundus images, based on the well-known notion of Cost-Sensitive classification, and expands standard classification losses with an extra term that acts as a regularizer.
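One common way to realize such a cost-sensitive regularizer, shown here as a hedged sketch rather than the paper's exact loss, is to add the expected misclassification cost under the predicted distribution, with a cost matrix that grows with the ordinal distance between predicted and true DR grade:

```python
import numpy as np

def cost_sensitive_loss(probs, label, n_grades=5, lam=1.0):
    """Cross-entropy plus an expected-cost regularizer.
    The cost row penalizes probability mass by its ordinal distance
    from the true grade (confusing grade 0 with 4 costs more than 0 with 1)."""
    grades = np.arange(n_grades)
    cost = np.abs(grades - label)   # row of the |i - j| cost matrix for this label
    ce = -np.log(probs[label])      # standard classification loss
    reg = np.dot(probs, cost)       # expected misclassification cost
    return ce + lam * reg

# Two predictions with identical cross-entropy for true grade 1,
# differing only in where the residual probability mass sits:
near = np.array([0.05, 0.7, 0.2, 0.04, 0.01])   # mass near the true grade
far  = np.array([0.05, 0.7, 0.01, 0.04, 0.2])   # mass on a distant grade
loss_near = cost_sensitive_loss(near, 1)
loss_far = cost_sensitive_loss(far, 1)
```

The regularizer leaves the cross-entropy term unchanged but ranks the two predictions differently, which is exactly the behavior a cost-sensitive term is meant to add.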
This work proposes an inherently interpretable CNN for regression using similarity-based comparisons (INSightR-Net), demonstrates the method on the task of diabetic retinopathy grading, and quantifies the quality of the explanations using sparsity and diversity.
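The core idea behind similarity-based interpretable regression can be sketched independently of the paper's architecture: the prediction is a similarity-weighted average of the grades attached to learned prototypes, so each prototype's contribution is directly inspectable. A minimal illustration (prototype embeddings, grades, and the kernel scale are all hypothetical):

```python
import numpy as np

def similarity_regression(query, prototypes, proto_grades, scale=1.0):
    """Predict a DR grade as a similarity-weighted average of prototype
    grades; the normalized weights double as an explanation of which
    prototypes drove the prediction."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    weights = np.exp(-scale * dists**2)   # Gaussian similarity kernel
    weights /= weights.sum()
    return float(np.dot(weights, proto_grades)), weights

# Three 2-D prototype embeddings labeled with grades 0, 2, and 4
protos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
grades = np.array([0.0, 2.0, 4.0])
pred, w = similarity_regression(np.array([1.02, 0.0]), protos, grades, scale=5.0)
```

A query embedding close to the grade-2 prototype yields a prediction near 2, and the weight vector shows that prototype dominating, which is the kind of explanation sparsity and diversity metrics then score.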
Saliency-guided Self-Supervised image Transformer (SSiT) is proposed for Diabetic Retinopathy (DR) grading from fundus images, and significantly outperforms other representative state-of-the-art SSL methods on all downstream datasets and under various evaluation settings.