3260 papers • 126 benchmarks • 313 datasets
Image Enhancement is the process of improving the interpretability or perception of information in images for human viewers, and of providing 'better' input for other automated image processing techniques. The principal objective of Image Enhancement is to modify the attributes of an image to make it more suitable for a given task and a specific observer. Source: A Comprehensive Review of Image Enhancement Techniques
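Two classical enhancement operations illustrate the definition above: gamma correction, which brightens or darkens with a power-law curve, and percentile-based contrast stretching. The functions and parameter values below are illustrative, not from any particular paper:

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Apply a power-law curve to an image normalized to [0, 1].

    gamma < 1 brightens dark regions; gamma > 1 darkens them.
    """
    return np.clip(img, 0.0, 1.0) ** gamma

def contrast_stretch(img, low=2, high=98):
    """Linearly rescale intensities between the low/high percentiles."""
    lo, hi = np.percentile(img, [low, high])
    return np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

# A dark synthetic "image" with values in [0, 0.3]
img = np.linspace(0.0, 0.3, 16).reshape(4, 4)
bright = gamma_correct(img, gamma=0.5)
stretched = contrast_stretch(img)
```

Both operations modify image attributes (brightness, contrast) to make the result more suitable for a viewer or a downstream algorithm, which is exactly the objective stated above.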
These leaderboards are used to track progress in Image Enhancement.
Use these libraries to find Image Enhancement models and implementations.
A novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network and shows that it generalizes well to diverse lighting conditions.
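Zero-DCE's image-specific curve is a simple quadratic, LE(x) = x + a·x·(1 − x), applied iteratively per pixel. A minimal NumPy sketch, with fixed scalar curve parameters standing in for the per-pixel maps that the paper's deep network predicts:

```python
import numpy as np

def apply_light_enhancement_curve(img, alphas):
    """Iteratively apply the quadratic curve LE(x) = x + a * x * (1 - x).

    `alphas` holds one curve parameter per iteration; Zero-DCE predicts a
    per-pixel parameter map with a small CNN, but fixed scalars are used
    here for illustration. Each application maps [0, 1] into [0, 1].
    """
    x = np.clip(img, 0.0, 1.0)
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x

dark = np.full((4, 4), 0.1)  # a uniformly under-exposed "image"
enhanced = apply_light_enhancement_curve(dark, alphas=[0.8] * 8)
```

The curve brightens low intensities strongly while leaving x = 0 and x = 1 fixed, which is what makes repeated application safe for low-light enhancement.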
This paper presents a novel architecture, named MIRNet, with the collective goals of maintaining spatially-precise high-resolution representations through the entire network and receiving strong contextual information from the low-resolution representations.
This paper proposes a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images.
This work proposes a multi-color space encoder network, which enriches the diversity of feature representations by incorporating the characteristics of different color spaces into a unified structure, and designs a medium transmission-guided decoder network to enhance the network's response to quality-degraded regions.
It is shown that a variant of the stacked-sparse denoising autoencoder can learn from synthetically darkened and noise-added training examples to adaptively enhance images taken in natural low-light environments and/or degraded by hardware.
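Such synthetic training pairs can be generated by darkening a clean image and adding noise. A minimal sketch of this degradation step; the gamma value and noise level are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetically_degrade(img, gamma=3.0, noise_sigma=0.05):
    """Create a (degraded, clean) training pair from a clean image in [0, 1]:
    gamma-darken the image, then add Gaussian sensor-like noise."""
    dark = np.clip(img, 0.0, 1.0) ** gamma                # gamma > 1 darkens
    noisy = dark + rng.normal(0.0, noise_sigma, img.shape)  # additive noise
    return np.clip(noisy, 0.0, 1.0), img

clean = rng.random((8, 8))
degraded, target = synthetically_degrade(clean)
```

A denoising autoencoder trained on many such (degraded, target) pairs learns to invert the degradation, which is why it transfers to real low-light images.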
A simple yet principled One-stage Retinex-based Framework (ORF) is paired with an Illumination-Guided Transformer (IGT) that utilizes illumination representations to direct the modeling of non-local interactions between regions with different lighting conditions, yielding the algorithm Retinexformer.
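The Retinex model underlying ORF decomposes an image as I = R ⊙ L (reflectance times illumination). A crude NumPy sketch that uses the per-pixel channel maximum as the illumination estimate — a hand-crafted stand-in for the learned illumination estimator, for illustration only:

```python
import numpy as np

def retinex_enhance(img, eps=1e-3):
    """Sketch of Retinex-based enhancement: I = R * L.

    Estimate illumination L as the per-pixel maximum over color channels,
    then recover reflectance R = I / L as the brightened output.
    """
    L = img.max(axis=-1, keepdims=True)   # per-pixel illumination proxy
    R = img / np.maximum(L, eps)          # reflectance stays in [0, 1]
    return R

# A synthetic low-light RGB image with values in [0, 0.2)
low_light = np.random.default_rng(1).random((4, 4, 3)) * 0.2
enhanced = retinex_enhance(low_light)
```

Dividing out the illumination brightens dark regions while preserving relative color, which is the intuition Retinexformer builds on with a learned, illumination-guided network instead of this hand-crafted estimate.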
Deep SESR is presented, a residual-in-residual network-based generative model that can learn to restore perceptual image qualities at 2x, 3x, or 4x higher spatial resolution, trained with a multi-modal objective function that addresses chrominance-specific underwater color degradation, lack of image sharpness, and loss of high-level feature representation.
This work proposes an architecture with three components — ESRGAN, EEN, and a detection network — used with different detector networks in an end-to-end manner, where the detector loss is backpropagated into the EESRGAN to improve detection performance.
Uformer, an effective and efficient Transformer-based architecture for image restoration, is proposed: a hierarchical encoder-decoder network built from Transformer blocks, with a learnable multi-scale restoration modulator, in the form of a multi-scale spatial bias, that adjusts features in multiple layers of the Uformer decoder.
A novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network and demonstrates the advantages of the method over state-of-the-art methods qualitatively and quantitatively.