Classifying different retinal degenerations from Optical Coherence Tomography (OCT) images.
(Image credit: Papersgraph)
These leaderboards are used to track progress in retinal-oct-disease-classification-18.
Use these libraries to find retinal-oct-disease-classification-18 models and implementations.
No subtasks available.
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
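As a rough illustration of the residual idea (a sketch, not the paper's exact architecture; channel counts and layer sizes are assumptions), a basic block adds its input back onto the output of a small convolutional stack:

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                       # skip connection carries the input forward
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # residual addition eases optimization of deep stacks

# Example: a 64-channel feature map passes through the block with its shape unchanged.
block = BasicResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Because the skip path is an identity, gradients can flow directly through long chains of such blocks, which is what makes the added depth trainable.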
This work explores ways to scale up networks that aim to use the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
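One way to read "suitably factorized convolutions" is replacing a large spatial kernel with a pair of asymmetric ones; the channel count and kernel size below are illustrative assumptions, not the paper's exact Inception modules:

```python
import torch
import torch.nn as nn

# A 7x7 convolution on 192 channels uses 192*192*49 (about 1.8M) weights; the
# asymmetric 1x7 + 7x1 pair below covers the same receptive field with
# 2*192*192*7 (about 0.52M) weights.
factorized = nn.Sequential(
    nn.Conv2d(192, 192, kernel_size=(1, 7), padding=(0, 3)),
    nn.ReLU(inplace=True),
    nn.Conv2d(192, 192, kernel_size=(7, 1), padding=(3, 0)),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 192, 17, 17)
print(factorized(x).shape)  # torch.Size([1, 192, 17, 17])
```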
A new mobile architecture, MobileNetV2, is described that improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes, and allows decoupling of the input/output domains from the expressiveness of the transformation.
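A minimal sketch of the inverted residual block with a linear bottleneck that MobileNetV2 is built from; the expansion factor, channel counts, and input size here are assumptions for illustration:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand -> depthwise conv -> linear (no ReLU) projection, with a skip when shapes match."""
    def __init__(self, in_ch: int, out_ch: int, expansion: int = 6, stride: int = 1):
        super().__init__()
        hidden = in_ch * expansion
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),              # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),                 # 3x3 depthwise convolution
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),             # linear projection, no activation
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

y = InvertedResidual(32, 32)(torch.randn(1, 32, 56, 56))
print(y.shape)  # torch.Size([1, 32, 56, 56])
```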
A novel convolutional neural network architecture is proposed that distinguishes between different degenerations of retinal layers and their underlying causes, and predicts retinal diseases in real time while outperforming human diagnosticians.
A novel architecture using disease-specific feature representations is proposed, comprising two joint networks: one for supervised encoding of the disease model and the other for producing attention maps in an unsupervised manner to retain disease-specific spatial information.
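The summary above is brief, so the following is only one plausible reading of such a two-branch design: a supervised encoder for classification plus a branch that emits a spatial attention map re-weighting the features. All layer sizes, the number of classes, and the way the branches are joined are assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class TwoBranchOCTNet(nn.Module):
    """Sketch: a supervised classification branch plus an attention branch
    that produces a spatial map re-weighting the shared features."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Attention branch: single-channel spatial map in [0, 1]
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        # Supervised classification head on attention-weighted features
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        f = self.features(x)
        a = self.attention(f)        # spatial attention map (returned for inspection)
        logits = self.head(f * a)    # re-weight features before classifying
        return logits, a

logits, attn = TwoBranchOCTNet()(torch.randn(2, 1, 224, 224))
print(logits.shape, attn.shape)  # torch.Size([2, 4]) torch.Size([2, 1, 56, 56])
```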
This paper describes a method based on variational autoencoder regularization that improves classification performance when only a limited amount of labeled data is available, and shows superior performance compared to a pre-trained and fully fine-tuned ResNet-34 baseline.
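A hedged sketch of what variational-autoencoder regularization of a classifier can look like: the encoder is shared between a classification head and a VAE decoder, so reconstruction and KL terms can regularize the representation. The network sizes, the assumed 1x64x64 input, and the unweighted sum of loss terms are illustrative assumptions, not the paper's method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAERegularizedClassifier(nn.Module):
    """Sketch: a shared encoder feeds a classifier head and a VAE decoder."""
    def __init__(self, num_classes: int = 4, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(                      # assumes 1x64x64 inputs
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.classifier = nn.Linear(64, num_classes)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(inplace=True),
            nn.Unflatten(1, (64, 16, 16)),
            nn.Upsample(scale_factor=4),                   # back to 64x64
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.classifier(h), self.decoder(z), mu, logvar

def vae_regularized_loss(logits, recon, mu, logvar, x, y):
    ce = F.cross_entropy(logits, y)                                # supervised term
    rec = F.mse_loss(recon, x)                                     # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer
    return ce + rec + kl                                           # unweighted sum, for illustration
```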
This model outperforms existing models in predicting the risk of conversion from intermediate age-related macular degeneration to the late wet-AMD stage within a given time frame; it avoids heavy augmentations and implicitly incorporates the temporal information in the image pairs.
It is found that both the MixMatch and FixMatch algorithms outperform the transfer learning baseline on all fractions of labelled data, and that an exponential moving average of the model parameters is not needed for this classification problem, since disabling it leaves the outcome unchanged.
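For reference, the unlabeled-data term that FixMatch-style training adds can be sketched as below; the confidence threshold, the weak/strong augmentation pipelines, and the loss weighting are placeholders rather than the compared papers' exact settings:

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold: float = 0.95):
    """Pseudo-label weakly augmented scans, then enforce consistency on the
    strongly augmented views; only confident predictions contribute."""
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = (conf >= threshold).float()                 # keep only confident pseudo-labels
    logits_strong = model(strong_batch)
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()
```

The total objective is then the supervised cross-entropy on the labelled batch plus a weighted version of this term.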
Adding a benchmark result helps the community track progress.