3260 papers • 126 benchmarks • 313 datasets
Using a single model to restore inputs with different degradation types.
A capable vision-language model and a synthetic degradation pipeline are leveraged to learn image restoration in the wild (wild IR), and a posterior sampling strategy is presented for fast, noise-free image generation across various degradations.
A Degradation-Aware Residual-Conditioned Optimal Transport (DA-RCOT) approach that models (all-in-one) image restoration as an optimal transport (OT) problem for unpaired and paired settings, introducing the transport residual as a degradation-specific cue for both the transport cost and the transport map.
Degradation-aware visual prompts are presented that encode various types of image degradation, e.g., noise and blur, into unified visual prompts; these prompts provide control over image processing and allow weighted combinations for customized image restoration.
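A minimal sketch of the weighted-combination idea described above: one prompt embedding per degradation type, mixed by user-chosen weights to condition a restorer on several degradations at once. All names, dimensions, and the random stand-in embeddings are illustrative, not the paper's actual code.

```python
import numpy as np

PROMPT_DIM = 8
# One learnable prompt embedding per degradation type (random stand-ins here).
rng = np.random.default_rng(0)
prompts = {
    "noise": rng.standard_normal(PROMPT_DIM),
    "blur": rng.standard_normal(PROMPT_DIM),
    "rain": rng.standard_normal(PROMPT_DIM),
}

def combine_prompts(weights: dict) -> np.ndarray:
    """Normalized weighted combination of degradation prompts."""
    total = sum(weights.values())
    return sum((w / total) * prompts[k] for k, w in weights.items())

# An equal mix of noise and blur prompts conditions the restorer on both.
mixed = combine_prompts({"noise": 0.5, "blur": 0.5})
```

The weights act as the customization knob: shifting weight toward one degradation type biases the restorer toward removing that degradation.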
This paper presents a degradation-aware vision-language model (DA-CLIP) to better transfer pretrained vision-language models to low-level vision tasks as a multi-task framework for image restoration.
This paper proposes to learn a neural degradation representation (NDR) that captures the underlying characteristics of various degradations and adaptively decomposes different degradation types, acting like a neural dictionary of basic degradation components.
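The "neural dictionary" decomposition can be sketched as follows: a degradation feature is soft-assigned to a small set of learned atoms, and the coefficients recombine those atoms adaptively. The atom count, dimensions, and attention-style assignment here are our simplification, not the paper's NDR implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
K, D = 4, 16                           # K dictionary atoms of dimension D
dictionary = rng.standard_normal((K, D))

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def decompose(degradation_feat: np.ndarray) -> np.ndarray:
    """Soft-assign a degradation feature to dictionary atoms."""
    logits = dictionary @ degradation_feat   # similarity to each atom
    return softmax(logits)                   # coefficients sum to 1

feat = rng.standard_normal(D)
coeffs = decompose(feat)
reconstruction = coeffs @ dictionary         # adaptive recombination of atoms
```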
The Composite Refinement Network (CRNet) is proposed, which performs unified image restoration and enhancement on multiple-exposure inputs and explicitly separates and strengthens high- and low-frequency information through pooling layers in specially designed Multi-Branch Blocks.
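Pooling-based frequency separation, as mentioned above, can be illustrated simply: average pooling keeps the low-frequency content, and the residual against the upsampled result is the high-frequency part. This is a generic sketch of the principle, not CRNet's actual blocks.

```python
import numpy as np

def split_frequencies(img: np.ndarray, k: int = 2):
    """Split an image into (low, high) parts via k x k average pooling."""
    h, w = img.shape
    low = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    low_up = np.repeat(np.repeat(low, k, axis=0), k, axis=1)  # upsample back
    high = img - low_up                # residual = high-frequency detail
    return low_up, high

img = np.arange(16, dtype=float).reshape(4, 4)
low, high = split_frequencies(img)
# low + high reconstructs the input exactly, so each branch can be
# strengthened independently and recombined without information loss.
```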
This work proposes a universal image restoration framework based on multiple low-rank adapters (LoRA) and multi-domain transfer learning: a pre-trained generative model serves as the shared component for multi-degradation restoration and is transferred to specific degradation restoration tasks via low-rank adaptation.
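A hedged sketch of the low-rank adaptation idea: the shared pre-trained weight `W` stays frozen, and each degradation task learns only a small rank-r update `B @ A`. Shapes, names, and the zero-initialization convention are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
d_out, d_in, r = 6, 6, 2
W = rng.standard_normal((d_out, d_in))       # frozen shared weight

class LoRAAdapter:
    """Task-specific low-rank update on top of the frozen weight W."""
    def __init__(self, rank: int):
        self.A = rng.standard_normal((rank, d_in)) * 0.01
        self.B = np.zeros((d_out, rank))     # zero init: no update at start

    def forward(self, x: np.ndarray) -> np.ndarray:
        return W @ x + self.B @ (self.A @ x)  # W x + (B A) x

deblur_adapter = LoRAAdapter(rank=r)
x = rng.standard_normal(d_in)
y = deblur_adapter.forward(x)
# With B zero-initialized, the adapted output equals the frozen model's output,
# so training starts from the shared pre-trained behavior.
```

One adapter per degradation keeps the per-task parameter count at r(d_in + d_out) instead of d_in * d_out.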
CycleRDM is a novel framework that unifies restoration and enhancement tasks while achieving high-quality mapping; it generalizes effectively to a wide range of image restoration and enhancement tasks, requiring only a small number of training samples while remaining significantly superior on various benchmarks.
This work introduces Contrastive Prompt Learning (CPL), a novel framework that fundamentally enhances prompt-task alignment through two complementary innovations: a Sparse Prompt Module (SPM) that efficiently captures degradation-specific features while minimizing redundancy, and a Contrastive Prompt Regularization (CPR) that explicitly strengthens task boundaries by incorporating negative prompt samples across different degradation types.
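The contrastive regularization described above can be sketched with an InfoNCE-style loss: a task prompt is pulled toward features of its own degradation type and pushed away from prompts of other types. This is our simplification of the idea; the function names, temperature, and toy vectors are not from the paper.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_prompt_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: low when anchor aligns with positive, not negatives."""
    pos = np.exp(cosine(anchor, positive) / tau)
    neg = sum(np.exp(cosine(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(3)
noise_prompt = rng.standard_normal(8)
noise_feat = noise_prompt + 0.05 * rng.standard_normal(8)   # aligned positive
blur_prompt = rng.standard_normal(8)                        # negative sample
rain_prompt = rng.standard_normal(8)                        # negative sample
loss = contrastive_prompt_loss(noise_prompt, noise_feat,
                               [blur_prompt, rain_prompt])
```

Minimizing this loss sharpens task boundaries: prompts for different degradation types are driven apart in embedding space.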
Self-Improved Privilege Learning is introduced, a paradigm that extends the utility of privileged information beyond training into the inference stage; it can be seamlessly integrated into various backbone architectures, offering substantial performance improvements with minimal computational overhead.