3260 papers • 126 benchmarks • 313 datasets
Color Constancy is the ability of the human visual system to perceive the colors of objects in a scene as largely invariant to the color of the light source. The task of computational Color Constancy is to estimate the scene illumination and then perform chromatic adaptation in order to remove the influence of the illumination color on the colors of objects in the scene. Source: CroP: Color Constancy Benchmark Dataset Generator
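The two-step recipe above (estimate the illuminant, then adapt) can be sketched with the classic gray-world assumption and a von Kries diagonal correction. This is a minimal illustrative baseline, not any of the methods listed below:

```python
import numpy as np

def gray_world_correct(img):
    """Gray-world illuminant estimation + von Kries chromatic adaptation.
    `img` is a float RGB array in [0, 1] with shape (H, W, 3)."""
    # Gray-world assumption: the average scene reflectance is achromatic,
    # so the per-channel mean of the image estimates the illuminant color.
    illuminant = img.reshape(-1, 3).mean(axis=0)
    illuminant = illuminant / illuminant.sum()  # normalize to chromaticity
    # Diagonal (von Kries) correction: divide each channel by its
    # illuminant component, scaled so overall brightness is preserved.
    corrected = img / (3.0 * illuminant)
    return np.clip(corrected, 0.0, 1.0)
```

On an image with a uniform color cast, the corrected channel means become equal, i.e. the cast is removed; real scenes violate the gray-world assumption to varying degrees, which is why the learned methods below exist.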
These leaderboards are used to track progress in Color Constancy
Use these libraries to find Color Constancy models and implementations
A novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network and shows that it generalizes well to diverse lighting conditions.
A new temporal color constancy benchmark is introduced, comprising 600 real-world sequences recorded with a high-resolution mobile phone camera, a fixed train-test split that ensures consistent evaluation, and a baseline method that achieves high accuracy on both the new benchmark and the dataset used in previous works.
Fast Fourier Color Constancy (FFCC), a color constancy algorithm which solves illuminant estimation by reducing it to a spatial localization task on a torus, enables better training techniques, an effective temporal smoothing technique, and richer methods for error analysis.
This work trains a convolutional neural network model that learns to estimate the illuminant color and gamma correction parameters based on the semantic information of the given image in order to remove color casts.
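Once a network has predicted an illuminant color and a gamma value as described above, the cast removal itself is a simple per-pixel operation. The sketch below assumes a diagonal (von Kries) correction followed by the predicted gamma; the parameter names and exact pipeline are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def apply_correction(img, illuminant, gamma):
    """Hypothetical post-processing given predicted parameters:
    divide out the illuminant color (diagonal correction), then
    apply the predicted gamma exponent. `img` is float RGB in [0, 1]."""
    balanced = img / np.maximum(illuminant, 1e-6)  # remove the color cast
    balanced = np.clip(balanced, 0.0, 1.0)
    return balanced ** gamma  # gamma correction with the predicted exponent
```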
Experiments on two real-world benchmarks show that the proposed approach outperforms state-of-the-art methods in the camera-agnostic scenario; in the setting where the camera is known, MSGP outperforms all statistical methods.
A novel design methodology for architecting a lightweight and faster DNN architecture for vision applications that disentangles the contextual features and color properties of an image, and later combines them to predict a global property (e.g. global illumination).
A novel illuminant color estimation framework is proposed for computational color constancy, which combines the high representational capacity of deep-learning-based models with the strong interpretability of assumption-based models.