3260 papers • 126 benchmarks • 313 datasets
Image relighting involves changing the illumination settings of an image.
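As a toy illustration of the task (not any of the methods listed on this page), a minimal sketch of "relighting" is adjusting an image's per-channel gains toward a warmer or cooler look; the `relight` helper and the example gains below are illustrative assumptions, not part of any benchmark method.

```python
import numpy as np

def relight(image, gains):
    """Naively relight an image by per-channel gains.

    image: float array in [0, 1] with shape (H, W, 3).
    gains: length-3 sequence, e.g. (1.2, 1.0, 0.8) for a warmer cast.
    Real relighting methods change light direction and shadows as well;
    this only shifts the global colour balance.
    """
    out = image * np.asarray(gains, dtype=image.dtype)
    return np.clip(out, 0.0, 1.0)
```

For example, `relight(img, (1.2, 1.0, 0.8))` boosts red and attenuates blue, a crude stand-in for lowering the colour temperature.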
(Image credit: Papersgraph)
These leaderboards are used to track progress in Image Relighting.
Use these libraries to find Image Relighting models and implementations.
A Scale-recurrent Network (SRN-DeblurNet) is proposed and shown to produce better-quality results than state-of-the-art methods, both quantitatively and qualitatively, in single-image deblurring.
This paper reviews the NTIRE 2021 depth-guided image relighting challenge, which relies on the VIDIT dataset for both challenge tracks: using depth information, the goal is to transform the illumination settings of an input image to match those of a guide image, similar to style transfer.
This work presents a novel dataset, the Virtual Image Dataset for Illumination Transfer (VIDIT), in an effort to create a reference evaluation benchmark and to push forward the development of illumination manipulation methods.
The novel VIDIT dataset used in the AIM 2020 challenge and the different proposed solutions and final evaluation results over the 3 challenge tracks are presented.
A deep learning-based method called the multi-modal bifurcated network (MBNet) is proposed for depth-guided image relighting: given an image and the corresponding depth map, the model generates a new image with a given illuminant angle and color temperature.
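To see why depth helps relighting at all, a classical (non-learned) sketch is useful: estimate surface normals from the depth map, then modulate the image by Lambertian shading under a new light direction. This is emphatically not MBNet; the function, its name, and the shading model are illustrative assumptions.

```python
import numpy as np

def depth_guided_relight(image, depth, light_dir):
    """Toy depth-guided relighting via Lambertian shading.

    image: (H, W, 3) floats in [0, 1]; depth: (H, W) floats;
    light_dir: length-3 direction of the new light source.
    Normals are approximated from depth gradients as
    n ∝ (-dz/dx, -dz/dy, 1), then the image is scaled by the
    clamped cosine between each normal and the light direction.
    """
    dzdy, dzdx = np.gradient(depth)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shade = np.clip(normals @ l, 0.0, 1.0)  # Lambertian cos term
    return np.clip(image * shade[..., None], 0.0, 1.0)
```

On a flat depth map all normals point along +z, so a head-on light leaves the image unchanged, while an oblique light darkens it uniformly; learned methods like those above go further by synthesizing shadows and colour changes.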
This work outputs triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified, and introduces a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting.
This work applies a scheduling algorithm to quantum supremacy circuits in order to reduce the required communication, and simulates a 45-qubit circuit on the Cori II supercomputer using 8,192 nodes and 0.5 petabytes of memory, which constitutes the largest quantum circuit simulation to date.
This work proposes an approach to local relighting that trains a model without supervision from any novel image dataset, by using synthetically generated image pairs from another model, including a StyleSpace-manipulated GAN.
GridDehazeNet, an end-to-end trainable convolutional neural network for single-image dehazing, implements a novel attention-based multi-scale estimation on a grid network; the work also explains why it is not necessarily beneficial to exploit the dimension reduction offered by the atmosphere scattering model.
The image relighting task of transferring illumination conditions between two images poses an interesting and difficult challenge, with potential applications in photography, cinematography, and computer graphics; methods to achieve this goal are presented.