3260 papers • 126 benchmarks • 313 datasets
Intrinsic Image Decomposition is the process of separating an image into its formation components, such as reflectance (albedo) and shading (illumination). Reflectance is the intrinsic color of the object, invariant to camera viewpoint and illumination conditions, whereas shading depends on camera viewpoint and object geometry and comprises different illumination effects, such as shadows, shading gradients and inter-reflections. Using intrinsic images instead of the original images can benefit many computer vision algorithms. For instance, shading images contain important visual cues that shape-from-shading algorithms use to recover geometry, while reflectance images can benefit segmentation and detection algorithms because they are independent of confounding illumination effects. Furthermore, intrinsic images are used in a wide range of computational photography applications, such as material recoloring, relighting, retexturing and stylization. Source: CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition
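The standard formation model behind this task writes the observed image as a per-pixel product of reflectance and shading, I = R ⊙ S. The following is a minimal NumPy sketch of that model on synthetic data (all array names and sizes are illustrative, not taken from any particular method); it also shows why the problem is ill-posed: reflectance is only recoverable by division when the shading is known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reflectance: piecewise-constant "paint" colors, 64 x 64 x 3.
reflectance = np.repeat(rng.uniform(0.2, 1.0, size=(4, 4, 3)), 16, axis=0)
reflectance = np.repeat(reflectance, 16, axis=1)

# Synthetic shading: a smooth grayscale ramp, broadcast over color channels.
y = np.linspace(0.3, 1.0, 64)
shading = np.tile(y[:, None], (1, 64))[..., None]      # 64 x 64 x 1

# Image formation: the observed image is the element-wise product I = R * S.
image = reflectance * shading

# Given the true shading, reflectance follows exactly by division; when only
# `image` is observed, infinitely many (R, S) pairs explain it, which is what
# makes the decomposition ill-posed without priors or learned models.
recovered_reflectance = image / np.clip(shading, 1e-6, None)
```

Real methods never have access to `shading` and must instead impose priors (e.g. piecewise-constant reflectance, smooth shading) or learn the decomposition from data, as the papers below do.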
This paper proposes a novel unsupervised intrinsic image decomposition framework, which relies on neither labeled training data nor hand-crafted priors, and directly learns the latent features of reflectance and shading from unsupervised and uncorrelated data.
A straightforward method is demonstrated for generating dense pseudo ground truth from the model’s predictions and multi-illumination data, enabling generalization to in-the-wild imagery; the real-world applicability of the estimates is shown by performing otherwise difficult editing tasks such as recoloring and relighting.
This approach is shown to surpass state-of-the-art methods both on single-image depth estimation and on intrinsic image decomposition.
This paper explores a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes.
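The core constraint exploited by such sequence-based approaches is that, in the log domain, log I_t = log R + log S_t, so images of the same scene under changing illumination share a single log-reflectance term. The sketch below illustrates this constraint numerically with a per-pixel temporal median in the spirit of Weiss's classic observation; it is an illustration of the consistency idea on synthetic data, not the method of the paper above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed scene reflectance (log domain) and 9 varying illumination conditions.
log_R = rng.uniform(-1.0, 0.0, size=(32, 32))
log_S = rng.normal(0.0, 0.3, size=(9, 32, 32))

# Observed sequence: log I_t = log R + log S_t for each time step t.
log_I = log_R[None] + log_S

# Per-pixel median over time cancels much of the zero-centered shading
# variation, leaving an estimate of the shared log-reflectance.
log_R_hat = np.median(log_I, axis=0)

# The temporal estimate is far closer to log_R than any single frame.
err_median = np.abs(log_R_hat - log_R).mean()
err_single = np.abs(log_I[0] - log_R).mean()
```

Learning-based variants replace the fixed median with a trained single-view decomposer whose outputs are required to be consistent with the observed illumination changes.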
A supervised end-to-end CNN architecture to jointly learn intrinsic image decomposition and semantic segmentation is proposed and the gains of addressing those two problems jointly are analyzed.
An efficient approach based on a deep recurrent network is proposed for enforcing temporal consistency in a video; it can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation and intrinsic image decomposition.
A physically constrained learning-based method that directly estimates document reflectance based on intrinsic image formation which generalizes to challenging illumination conditions, and a new dataset that clearly improves previous synthetic ones, by adding a large range of realistic shading and diverse multi-illuminant conditions.
A model that enriches neural networks with physical insight is proposed, which can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
This paper shows how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image using a fully convolutional neural network; to the authors' knowledge, this is the first attempt to use MVS supervision for learning inverse rendering.
A novel intrinsic image transfer (IIT) algorithm for image illumination manipulation, which creates a local image translation between two illumination surfaces, built on an optimization-based framework composed of illumination, reflectance and content photo-realistic losses.