3260 papers • 126 benchmarks • 313 datasets
Low-Light Image Enhancement is a computer vision task that involves improving the quality of images captured under low-light conditions. The goal of low-light image enhancement is to make images brighter, clearer, and more visually appealing, without introducing too much noise or distortion.
(Image credit: Papersgraph)
These leaderboards are used to track progress in Low-Light Image Enhancement.
Use these libraries to find Low-Light Image Enhancement models and implementations.
The approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location, which makes SSD easy to train and straightforward to integrate into systems that require a detection component.
A novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network and shows that it generalizes well to diverse lighting conditions.
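Zero-DCE's core idea is an iterative quadratic curve that remaps each pixel's intensity. A minimal numpy sketch of that curve follows; in the actual method the coefficient is a per-pixel map predicted by a deep network, whereas a single scalar `alpha` is used here purely for illustration.

```python
import numpy as np

def zero_dce_curve(img, alpha, iterations=8):
    """Apply the iterative quadratic light-enhancement curve from Zero-DCE.

    Each iteration applies LE(x) = x + alpha * x * (1 - x) per pixel, which
    maps [0, 1] onto [0, 1] and is monotonic for |alpha| <= 1. The real
    method predicts a per-pixel alpha map with a CNN; a scalar alpha is an
    illustrative simplification here.
    """
    x = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

# Dark pixels are lifted strongly; bright pixels change little.
dark = np.array([0.05, 0.2, 0.5, 0.9])
bright = zero_dce_curve(dark, alpha=0.8, iterations=4)
```

Because the curve is applied image-specifically and needs no reference images, the network can be trained with non-reference losses alone, which is what makes the method "zero-reference".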
A practical system which is able to predict pixel-wise class labels with a measure of model uncertainty, and shows that modelling uncertainty improves segmentation performance by 2-3% across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation.
This paper proposes a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images.
It is shown that a variant of the stacked-sparse denoising autoencoder can learn from synthetically darkened and noise-added training examples to adaptively enhance images taken in natural low-light environments and/or degraded by hardware.
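The training strategy above relies on degrading clean images synthetically. A minimal sketch of such a degradation step is shown below; the gamma value and noise level are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def synth_darken(img, gamma=3.0, noise_sigma=0.03, rng=None):
    """Produce a synthetically darkened, noisy training input from a clean image.

    Gamma > 1 darkens the image nonlinearly; additive Gaussian noise mimics
    sensor noise. The (clean, degraded) pair can then supervise a denoising
    autoencoder. Parameter values here are illustrative only.
    """
    rng = rng or np.random.default_rng(0)
    dark = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0) ** gamma
    noisy = dark + rng.normal(0.0, noise_sigma, size=dark.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.full((4, 4), 0.8)
degraded = synth_darken(clean)
```

The autoencoder then learns the inverse mapping, from `degraded` back to `clean`, and applies it to real low-light photographs at test time.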
A simple yet principled One-stage Retinex-based Framework (ORF), paired with an Illumination-Guided Transformer (IGT) that utilizes illumination representations to direct the modeling of non-local interactions among regions with different lighting conditions; together they yield the algorithm Retinexformer.
This work builds a simple yet effective network for Kindling the Darkness (denoted as KinD) which, inspired by Retinex theory, decomposes images into two components, yielding robustness against severe visual defects and a user-friendly way to adjust light levels arbitrarily.
Extensive experiments demonstrate that the proposed deep Retinex-Net learned on this LOw-Light dataset not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition.
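Retinex theory, which underlies both KinD and Retinex-Net, models an image as the product of a reflectance component and an illumination component; enhancement then amounts to brightening the illumination while keeping reflectance fixed. The following is a crude hand-crafted sketch of that idea, assuming a channel-maximum illumination estimate and a gamma adjustment; the papers learn both the decomposition and the adjustment with neural networks instead.

```python
import numpy as np

def retinex_enhance(img, gamma=0.5, eps=1e-4):
    """Crude Retinex-style enhancement sketch (not the learned Retinex-Net).

    Estimates illumination L as the per-pixel channel maximum, takes
    reflectance R = I / L so that I = R * L, brightens L with a gamma
    curve (gamma < 1 lifts dark regions), and recomposes the image.
    """
    img = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    illum = img.max(axis=-1, keepdims=True)   # illumination estimate L
    refl = img / (illum + eps)                # reflectance R
    illum_adj = illum ** gamma                # brightened illumination
    return np.clip(refl * illum_adj, 0.0, 1.0)

# A uniformly dark RGB image gets visibly brighter.
low = np.full((2, 2, 3), 0.1)
out = retinex_enhance(low)
```

Learned variants replace the channel-maximum heuristic with a decomposition network trained on paired low/normal-light images, which is what the LOw-Light (LOL) dataset mentioned above provides.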
This paper proposes a novel end-to-end attention-guided method based on multi-branch convolutional neural network that can produce high fidelity enhancement results for low-light images and outperforms the current state-of-the-art methods both quantitatively and visually.
This work turns a single unlabeled test sample into a self-supervised learning problem, on which the model parameters are updated before making a prediction, which leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.