3260 papers • 126 benchmarks • 313 datasets
Blind Image Deblurring is a classical problem in image processing and computer vision that aims to recover a sharp latent image from a blurred input. Source: Learning a Discriminative Prior for Blind Image Deblurring
Displaced aggregation units (DAUs) provide a seamless substitution for convolutional filters in existing state-of-the-art architectures, demonstrated on AlexNet, ResNet50, ResNet101, DeepLab, and SRN-DeblurNet.
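The abstract does not spell out the mechanism, so the NumPy sketch below only illustrates the general DAU idea: replace a fixed K×K filter with a handful of Gaussian-windowed samples of the input, taken at sub-pixel displacements and combined with learned weights. The function name, displacement values, and weights are illustrative assumptions, and the learning of the displacements (the point of the method) is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as subpixel_shift

def dau_forward(feature, displacements, weights, sigma=0.5):
    """Aggregate Gaussian-smoothed, sub-pixel-shifted copies of `feature`."""
    smoothed = gaussian_filter(feature, sigma)
    out = np.zeros_like(feature)
    for (dy, dx), w in zip(displacements, weights):
        # Each unit reads the input through a small Gaussian window placed at
        # a (learnable; here fixed) sub-pixel displacement.
        out += w * subpixel_shift(smoothed, (dy, dx), order=1)
    return out

# Toy usage: four units cover a wide receptive field with very few parameters.
feat = np.random.default_rng(0).random((32, 32))
disp = np.array([[0.0, 0.0], [1.5, 0.0], [0.0, -1.5], [-2.5, 2.5]])
w = np.array([0.4, 0.2, 0.2, 0.2])
print(dau_forward(feat, disp, w).shape)  # (32, 32)
```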
A recurrent gradient descent network (RGDN) is proposed by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme; it learns an implicit image prior and a universal update rule through recursive supervision.
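As a rough illustration of the idea, the sketch below runs a hand-written gradient-descent loop for non-blind deconvolution with a known kernel. In RGDN the update (here a plain damped gradient step) is a trained network and the image prior is learned implicitly; all names and constants below are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def data_grad(x, y, k):
    # Gradient of 0.5 * ||k * x - y||^2 w.r.t. x: correlate the residual
    # with the flipped kernel (the adjoint of convolution).
    residual = fftconvolve(x, k, mode="same") - y
    return fftconvolve(residual, k[::-1, ::-1], mode="same")

def rgdn_like_restore(y, k, steps=50, lr=1.0):
    x = y.copy()  # initialize from the blurry observation
    for _ in range(steps):
        # Stand-in for RGDN's learned update network: a damped gradient step.
        x = x - lr * data_grad(x, y, k)
    return x

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
y = fftconvolve(sharp, k, mode="same")
x = rgdn_like_restore(y, k)
# The data fit improves monotonically over the iterations.
print(np.linalg.norm(fftconvolve(x, k, mode="same") - y)
      < np.linalg.norm(fftconvolve(y, k, mode="same") - y))  # True
```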
This work shows that current state-of-the-art kernel estimation methods based on the ℓ0 gradient prior can be adapted to handle high noise levels while keeping their efficiency, and that a fast non-blind deconvolution method can be significantly improved by first denoising the blurry image.
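A minimal sketch of the "denoise first, then run a fast non-blind deconvolution" pipeline, with a Gaussian filter standing in for the paper's denoiser and Wiener deconvolution standing in for its fast non-blind solver. The ℓ0-gradient-prior kernel estimation step is not reproduced, and the kernel is assumed known.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def wiener_deconv(y, k, snr=1e-2):
    """Fast non-blind deconvolution in the Fourier domain."""
    K = np.fft.fft2(k, s=y.shape)
    X = np.conj(K) * np.fft.fft2(y) / (np.abs(K) ** 2 + snr)
    x = np.real(np.fft.ifft2(X))
    # Undo the shift introduced by padding the kernel at the top-left corner.
    return np.roll(x, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
noisy = fftconvolve(sharp, k, mode="same") + 0.05 * rng.standard_normal((64, 64))

direct = wiener_deconv(noisy, k)                                # deconvolve directly
denoised_first = wiener_deconv(gaussian_filter(noisy, 1.0), k)  # denoise, then deconvolve
print(np.abs(direct - sharp).mean(), np.abs(denoised_first - sharp).mean())
```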
A modification of the proposed scheme is presented that governs the deblurring process under both generative and classical priors, to improve performance on rich image datasets that the generative networks do not model well.
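To make the combination of priors concrete, here is a heavily simplified sketch of a deblurring objective mixing a generative term with a classical one: `x_gen` is a placeholder for the output of an inverted generative network, a Tikhonov smoothness term stands in for the classical prior, and the kernel is assumed known.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import laplace, gaussian_filter

def deblur_two_priors(y, k, x_gen, alpha=0.1, beta=0.05, steps=100, lr=0.5):
    """Gradient descent on a data term plus generative and smoothness priors."""
    x = y.copy()
    for _ in range(steps):
        residual = fftconvolve(x, k, mode="same") - y
        g_data = fftconvolve(residual, k[::-1, ::-1], mode="same")
        g_gen = alpha * (x - x_gen)    # pull toward the generative estimate
        g_smooth = -beta * laplace(x)  # classical smoothness (Tikhonov) term
        x = x - lr * (g_data + g_gen + g_smooth)
    return x

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
y = fftconvolve(sharp, k, mode="same")
x_gen = gaussian_filter(y, 1.0)  # placeholder for a generative network's output
print(deblur_two_priors(y, k, x_gen).shape)  # (64, 64)
```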
This work proposes to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural-image-prior kernels, which allows extremely efficient parameter sharing across the image and leads to significant gains in accuracy and/or speed over classical FFT and conjugate-gradient methods.
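A sketch of the core idea under simplifying assumptions: precondition the Richardson iteration x ← x + p ∗ (y − k ∗ x) with a small spatial filter p approximating the inverse of the known blur k. Building p by truncating a regularized Fourier inverse is an illustrative choice, and the natural-image-prior term is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def approx_inverse_filter(k, shape=(64, 64), size=15, snr=1e-2):
    """Truncate a regularized Fourier inverse of k to a small spatial filter."""
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)  # transfer function of the centered kernel
    p = np.real(np.fft.ifft2(np.conj(K) / (np.abs(K) ** 2 + snr)))
    p = np.fft.fftshift(p)  # move the filter's center to the middle
    c, h = shape[0] // 2, size // 2
    return p[c - h:c + h + 1, c - h:c + h + 1]

def richardson_deconv(y, k, steps=30):
    p = approx_inverse_filter(k)
    x = fftconvolve(y, p, mode="same")  # preconditioned initialization
    for _ in range(steps):
        residual = y - fftconvolve(x, k, mode="same")
        x = x + fftconvolve(residual, p, mode="same")  # preconditioned update
    return x

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
y = fftconvolve(sharp, k, mode="same")
x = richardson_deconv(y, k)
# The error typically drops well below that of the blurry input.
print(np.abs(y - sharp).mean(), np.abs(x - sharp).mean())
```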
Blind image deblurring is a long-standing challenge in image processing and low-level vision. Recently, sophisticated priors such as the dark channel prior, the extreme channel prior, and the local maximum gradient prior have shown promising effectiveness. However, these methods are computationally expensive. Moreover, since the subproblems involving these priors cannot be solved explicitly, approximate solutions are commonly used, which limits how fully their capability can be exploited. To address these problems, this work first proposes a simplified sparsity prior of local minimal pixels, namely patch-wise minimal pixels (PMP). The PMP of clear images is much sparser than that of blurred ones, and hence is very effective in discriminating between clear and blurred images. A novel algorithm is then designed to efficiently exploit the sparsity of the PMP in deblurring. Rather than directly applying the half-quadratic splitting algorithm, it flexibly imposes sparsity on the PMP under the maximum a posteriori (MAP) framework, avoiding the non-rigorous approximate solutions of existing algorithms while being much more computationally efficient. Extensive experiments demonstrate that the proposed algorithm achieves better practical stability than the state of the art, and is superior in deblurring quality, robustness, and computational efficiency. Code for reproducing the results is available at https://github.com/FWen/deblur-pmp.git.
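A minimal sketch of the PMP statistic as the abstract defines it: the minimum pixel value inside each non-overlapping patch. The patch size, the toy image, and the sparsity measure below are illustrative; the full method embeds a sparsity-inducing term on the PMP in a MAP deblurring objective, which is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def pmp(img, patch=8):
    """Minimum pixel value in each non-overlapping patch x patch block."""
    h, w = img.shape
    h, w = h - h % patch, w - w % patch  # crop to a multiple of the patch size
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.min(axis=(1, 3))

rng = np.random.default_rng(0)
# A toy "sharp" image with many dark pixels; blurring raises local minima.
sharp = (rng.random((128, 128)) > 0.7).astype(float)
blurry = fftconvolve(sharp, np.ones((9, 9)) / 81.0, mode="same")
# The PMP of the sharp image is far sparser (far more near-zero entries).
print((pmp(sharp) < 0.05).mean(), (pmp(blurry) < 0.05).mean())
```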
This paper introduces a method to encode the blur operators of an arbitrary dataset of sharp-blur image pairs into a blur kernel space that can handle unseen blur kernels, while avoiding the complicated handcrafted priors on the blur operator often found in classical methods.
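The sketch below is a drastically simplified, classical stand-in for that idea: recover each pair's kernel by regularized Fourier division, then PCA the recovered kernels into a low-dimensional "kernel space". The paper learns this encoding with neural networks; the estimator, kernel sizes, and dimensionality here are all illustrative, and real motion kernels are far more structured (hence more compressible) than the random ones used below.

```python
import numpy as np

def centered_pad(k, shape=(64, 64)):
    """Embed a small kernel in a larger array, centered at the origin."""
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    return np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))

def estimate_kernel(sharp, blurry, ksize=15, eps=1e-3):
    """Per-pair kernel via regularized least squares in the Fourier domain."""
    X, Y = np.fft.fft2(sharp), np.fft.fft2(blurry)
    k = np.real(np.fft.ifft2(np.conj(X) * Y / (np.abs(X) ** 2 + eps)))
    k = np.fft.fftshift(k)
    c, h = sharp.shape[0] // 2, ksize // 2
    return k[c - h:c + h + 1, c - h:c + h + 1]

rng = np.random.default_rng(0)
kernels = []
for _ in range(100):
    sharp = rng.random((64, 64))
    k_true = rng.random((15, 15)); k_true /= k_true.sum()
    blurry = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(centered_pad(k_true))))
    kernels.append(estimate_kernel(sharp, blurry).ravel())

# PCA over the recovered kernels gives a low-dimensional "kernel space".
A = np.stack(kernels)
mean = A.mean(axis=0)
_, _, Vt = np.linalg.svd(A - mean, full_matrices=False)
basis = Vt[:10]                 # 10-dimensional blur kernel space
code = (A[0] - mean) @ basis.T  # encode one kernel
recon = mean + code @ basis     # decode it back
print(np.abs(recon - A[0]).max())
```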