Blind All-in-One Image Restoration aims to remove various degradations from an input image without prior knowledge of the degradation type or severity. This task covers five common image restoration tasks, each addressing a specific degradation: rain, haze, noise, blur, and low-light conditions. For training, we use the following datasets: Rain200L for deraining, RESIDE for dehazing, WED and BSD400 for denoising at a noise level of σ=25, GoPro for deblurring, and LoLv1 for low-light enhancement. For evaluation, we use Rain100L for deraining, SOTS (outdoor) for dehazing, BSD68 for denoising at σ=25, GoPro for deblurring, and LoLv1 for low-light enhancement. Model performance is assessed by the average PSNR across all five evaluation datasets, reflecting the model's overall ability to handle diverse degradations.
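The evaluation metric above can be sketched as follows: compute PSNR per evaluation set, then average across the five tasks. The per-task scores below are hypothetical placeholder values, not reported results.

```python
import numpy as np

def psnr(clean, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images scaled to [0, max_val]."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical per-task scores (dB), one per evaluation set, for illustration only:
scores = {"Rain100L": 37.9, "SOTS": 30.1, "BSD68": 31.0, "GoPro": 26.5, "LoLv1": 21.7}
avg_psnr = sum(scores.values()) / len(scores)  # the single number reported on the leaderboard
```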
These leaderboards are used to track progress in Blind All-in-One Image Restoration (5 degradations).
Use these libraries to find Blind All-in-One Image Restoration (5 degradations) models and implementations.
A Degradation-Aware Residual-Conditioned Optimal Transport (DA-RCOT) approach that models (all-in-one) image restoration as an optimal transport (OT) problem for unpaired and paired settings, introducing the transport residual as a degradation-specific cue for both the transport cost and the transport map.
In this paper, we study a challenging problem in image restoration: how to develop an all-in-one method that can recover images from a variety of unknown corruption types and levels. To this end, we propose an All-in-one Image Restoration Network (AirNet) consisting of two neural modules, the Contrastive-Based Degraded Encoder (CBDE) and the Degradation-Guided Restoration Network (DGRN). The major advantages of AirNet are two-fold. First, it is an all-in-one solution that can recover various degraded images within a single network. Second, AirNet requires no prior knowledge of the corruption types and levels, using only the observed corrupted image for inference. These two advantages give AirNet greater flexibility and practicality in real-world scenarios where priors on the corruptions are hard to obtain and the degradation varies over space and time. Extensive experimental results show the proposed method outperforms 17 image restoration baselines on four challenging datasets. The code is available at https://github.com/XLearning-SCU/2022-CVPR-AirNet.
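A contrastive encoder like CBDE is typically trained so that embeddings of patches with the same degradation attract and those with different degradations repel. A minimal sketch of such an InfoNCE-style objective (the function name, temperature, and cosine-similarity form are illustrative assumptions, not AirNet's exact loss):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """Contrastive (InfoNCE) loss: pull the anchor embedding toward the
    positive (same degradation type) and push it away from negatives
    (other degradation types)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Positive similarity first, then all negatives, scaled by temperature.
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()    # softmax over candidates
    return -np.log(p[0])                         # positive sits at index 0
```

A well-trained encoder drives this loss toward zero when same-degradation embeddings align and different-degradation embeddings are orthogonal.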
This work examines the sub-latent space of each input, identifying key components and reweighting them in a gated manner to enable both efficient and comprehensive restoration through a joint embedding mechanism without scaling up the model or relying on large multimodal models.
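The gated reweighting described above can be illustrated with a minimal sketch: score each latent component from the latent itself, then scale the components by a sigmoid gate. The gating matrix and the exact gating form here are assumptions for illustration, not the paper's design.

```python
import numpy as np

def gated_reweight(z, Wg):
    """Per-component sigmoid gate over a latent vector z: a learned linear
    map scores each component, and the sigmoid of the score reweights it
    (a generic sketch of gated latent reweighting)."""
    g = 1.0 / (1.0 + np.exp(-(z @ Wg)))   # gates in (0, 1), one per component
    return g * z
```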
Learning to leverage the relationships among diverse image restoration tasks is beneficial for unraveling the intrinsic ingredients behind the degradation. Recent years have witnessed a flourish of All-in-one methods, which handle multiple image degradations within a single model. In practice, however, few attempts have been made to excavate task correlations by exploring the underlying fundamental ingredients of various image degradations, resulting in poor scalability as more tasks are involved. In this paper, we propose a novel perspective that delves into the degradation in an ingredients-oriented rather than task-oriented manner for scalable learning. Specifically, our method, named the Ingredients-oriented Degradation Reformulation framework (IDR), consists of two stages: task-oriented knowledge collection and ingredients-oriented knowledge integration. In the first stage, we conduct ad hoc operations on different degradations according to the underlying physics principles and establish corresponding prior hubs for each type of degradation. The second stage progressively reformulates the preceding task-oriented hubs into a single ingredients-oriented hub via learnable Principal Component Analysis (PCA) and employs a dynamic routing mechanism for probabilistic unknown degradation removal. Extensive experiments on various image restoration tasks demonstrate the effectiveness and scalability of our method. More importantly, IDR exhibits favorable generalization to unknown downstream tasks.
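The hub-integration idea can be sketched with plain (non-learnable) PCA: stack the entries of the per-task prior hubs and keep the top-k principal directions as the shared, ingredients-oriented hub. This is a simplified stand-in, since IDR learns the reformulation end-to-end.

```python
import numpy as np

def integrate_hubs(hubs, k):
    """Fuse per-task 'prior hubs' (each an array of feature vectors) into one
    shared basis: stack all entries, center, and keep the top-k principal
    directions via SVD (a plain-PCA sketch of IDR's learnable reformulation)."""
    X = np.vstack(hubs)                    # (total_entries, dim)
    X = X - X.mean(axis=0, keepdims=True)  # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                          # (k, dim) orthonormal shared basis
```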
ABAIR is introduced, a simple yet effective adaptive blind all-in-one restoration model that not only handles multiple degradations and generalizes well to unseen distortions but also efficiently integrates new degradations by training only a small subset of parameters.
This work proposes HAIR, a hypernetwork-based, plug-and-play all-in-one image restoration method that generates parameters conditioned on the input image, enabling the model to adapt dynamically to the specific degradation; it also proposes Res-HAIR, which integrates HAIR into the well-known Restormer.
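The hypernetwork idea can be shown with a toy example: a small "hyper" map turns a degradation embedding into the weights of a per-image linear restoration layer. The class, sizes, and residual-identity initialization are illustrative assumptions, not HAIR's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class HyperLayer:
    """Toy hypernetwork: a linear hyper-map generates the weight matrix of a
    per-image linear layer from a degradation embedding, so different inputs
    get different restoration weights."""
    def __init__(self, emb_dim, feat_dim):
        # Hyper-map from embedding space to a flattened (feat_dim x feat_dim) weight.
        self.H = rng.standard_normal((emb_dim, feat_dim * feat_dim)) * 0.01
        self.feat_dim = feat_dim

    def __call__(self, features, degradation_emb):
        W = (degradation_emb @ self.H).reshape(self.feat_dim, self.feat_dim)
        W = W + np.eye(self.feat_dim)   # residual form: zero embedding -> identity
        return features @ W.T
```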