3260 papers • 126 benchmarks • 313 datasets
Medical image generation is the task of synthesising new medical images. ( Image credit: Towards Adversarial Retinal Image Synthesis )
These leaderboards are used to track progress in medical image generation.
Use these libraries to find medical image generation models and implementations.
An open-source platform built on TensorFlow APIs for deep learning in the medical imaging domain, facilitating warm starts from established pre-trained networks, adaptation of existing neural network architectures to new problems, and rapid prototyping of new solutions.
This paper proposes the PnPAdaNet (plug-and-play adversarial domain adaptation network) for adapting segmentation networks between different modalities of medical images, e.g., MRI and CT, and introduces a novel benchmark on the cardiac dataset for the task of unsupervised cross-modality domain adaptation.
It is shown that the implemented GAN models can synthesize visually realistic MR images (incorrectly labeled as real by a human), and also that models producing more visually realistic synthetic images do not necessarily achieve better quantitative error measurements when compared against ground-truth data.
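The GAN models behind these results are trained with an adversarial objective: a discriminator learns to tell real scans from synthetic ones while a generator learns to fool it. A minimal sketch of the standard loss computation, using hypothetical random logits in place of real network outputs:

```python
import numpy as np

def sigmoid(x):
    """Map raw discriminator logits to probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical discriminator logits for a batch of real and synthetic
# MR images; in an actual model these would come from a CNN.
rng = np.random.default_rng(0)
real_logits = rng.normal(loc=2.0, size=8)   # discriminator leans "real"
fake_logits = rng.normal(loc=-2.0, size=8)  # discriminator leans "fake"

# Discriminator loss: -log D(x) - log(1 - D(G(z)))
d_loss = (-np.mean(np.log(sigmoid(real_logits)))
          - np.mean(np.log(1.0 - sigmoid(fake_logits))))

# Non-saturating generator loss: -log D(G(z))
g_loss = -np.mean(np.log(sigmoid(fake_logits)))

print(d_loss, g_loss)
```

Note that neither loss measures pixel-wise fidelity to ground truth, which is one reason visually convincing samples can still score poorly on quantitative error metrics.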
To the best of the authors' knowledge, these results are the first to show visually appealing synthetic images that carry clinically meaningful information for automated skin cancer classification.
A novel artifact disentanglement network, which separates metal artifacts from CT images in the latent space, is introduced; it achieves performance comparable to existing supervised models for metal artifact reduction (MAR) and demonstrates better generalization ability than those supervised models.
A novel two-stage pipeline for generating synthetic medical images from a pair of generative adversarial networks, tested in practice on retinal fundus images; its hierarchical generation process divides the complex image-generation task into two parts: geometry and photorealism.
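The hierarchical split above can be illustrated as two composed stages: one producing the vessel geometry, one rendering it photorealistically. The stage functions below are hypothetical stand-ins (simple array transforms, not trained generators), kept only to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(42)

def geometry_stage(z):
    """Stage 1 (hypothetical): map a latent code to a binary
    vessel-geometry mask."""
    return (np.outer(z, z) > 0).astype(np.float32)

def photorealism_stage(mask):
    """Stage 2 (hypothetical): render the geometry mask into a
    3-channel fundus-like image with reddish vessel tones."""
    noise = rng.normal(scale=0.05, size=mask.shape + (3,))
    img = mask[..., None] * np.array([0.8, 0.3, 0.2]) + noise
    return np.clip(img, 0.0, 1.0)

z = rng.normal(size=16)      # latent code
mask = geometry_stage(z)     # 16x16 binary vessel mask
image = photorealism_stage(mask)  # 16x16x3 rendered image
```

Keeping the stages separate means each GAN solves a simpler sub-problem, which is the design rationale the paper gives for the hierarchy.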
This work proposes a method that learns to synthesize eye fundus images directly from data, pairing a vessel segmentation technique with a recent image-to-image translation technique based on the idea of adversarial learning.
Interestingly, it is shown that the proposed two-stage framework for automatic classification of skin lesion images, using adversarial training and transfer learning toward melanoma detection, leads to context-based lesion assessment that can reach the level of an expert dermatologist.
Adding a benchmark result helps the community track progress.