3260 papers • 126 benchmarks • 313 datasets
Inverse rendering is the task of recovering the properties of a scene, such as shape, material, and lighting, from an image or a video. Once recovered, these properties can be used to synthesize new images or videos of the scene.
This work develops an approximate differentiable renderer for a compact, interpretable representation, and shows that the method is the only one comparable to classic techniques for pose estimation while also performing well on shape from silhouette.
This work outputs triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified, and introduces a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting.
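The split sum idea can be illustrated numerically: the shading integral of lighting times BRDF is approximated by the product of the two integrals computed separately. A hedged 1-D toy (hypothetical setup, not the paper's exact formulation; on the domain [0, 1] the lighting average equals its integral):

```python
import math

# Midpoint-rule quadrature on [0, 1] with hypothetical lighting and BRDF terms.
n = 1000
dw = 1.0 / n
ws = [(i + 0.5) * dw for i in range(n)]
L = [1.0 + 0.1 * math.sin(6 * math.pi * w) for w in ws]  # smooth environment lighting
f = [max(0.0, math.cos(math.pi * w)) for w in ws]        # simple BRDF-like lobe

full = sum(Li * fi for Li, fi in zip(L, f)) * dw   # reference: integrate L * f jointly
split = (sum(L) * dw) * (sum(f) * dw)              # split approximation: product of integrals
print(full, split)  # the two values agree to within a few percent
```

The approximation is exact when lighting is constant over the BRDF lobe, which is why it works well for smooth environments and why the paper's differentiable formulation can still recover all-frequency lighting by optimizing through it.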
SfSNet is designed to reflect a physical Lambertian rendering model, and produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and for independent normal and illumination estimation.
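A Lambertian model assumes purely diffuse reflection: radiance is albedo times the clamped cosine between surface normal and light direction. A minimal sketch (single directional light; the function and parameter names are illustrative, not SfSNet's API):

```python
import math

def lambertian(albedo, normal, light_dir, intensity=1.0):
    # Normalize both vectors, then clamp the cosine term at zero
    # so surfaces facing away from the light receive nothing.
    nn = math.sqrt(sum(c * c for c in normal))
    nl = math.sqrt(sum(c * c for c in light_dir))
    cos_t = sum(a * b for a, b in zip(normal, light_dir)) / (nn * nl)
    return albedo * max(0.0, cos_t) * intensity

# A surface facing the light head-on returns its full albedo.
print(lambertian(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 0.8
```

Because this model is simple and differentiable, a network's predicted normals, albedo, and lighting can be composed through it and compared against the input image as a training signal.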
This work introduces a general-purpose differentiable ray tracer, which is the first comprehensive solution that is able to compute derivatives of scalar functions over a rendered image with respect to arbitrary scene parameters such as camera pose, scene geometry, materials, and lighting parameters.
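What such a ray tracer provides is the gradient of a scalar image loss with respect to every scene parameter. A hedged sketch of that idea using central finite differences as a stand-in for the analytic derivatives (the forward model below is a hypothetical toy, not a real ray tracer):

```python
# Finite-difference gradient of a scalar loss w.r.t. scene parameters.
def render(params):
    albedo, light = params
    return albedo * light  # toy one-pixel forward model

def grad(loss, params, eps=1e-6):
    g = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        g.append((loss(hi) - loss(lo)) / (2 * eps))
    return g

target = 0.48
loss = lambda p: (render(p) - target) ** 2
g = grad(loss, [0.5, 0.8])
print(g)  # gradient w.r.t. [albedo, light]
```

Finite differences scale linearly with the number of parameters, which is why an analytic differentiable ray tracer is needed in practice for scenes with millions of geometry, material, and lighting parameters.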
A deep inverse rendering framework for indoor scenes, which combines novel methods to map complex materials to existing indoor scene datasets and a new physically-based GPU renderer to create a large-scale, photorealistic indoor dataset.
Conventional physically-based methods for relighting portrait images need to solve an inverse rendering problem, estimating face geometry, reflectance, and lighting. However, inaccurate estimation of face components can cause strong artifacts in relighting, leading to unsatisfactory results. In this work, we apply a physically-based portrait relighting method to generate a large-scale, high-quality, “in the wild” portrait relighting dataset (DPR). A deep Convolutional Neural Network (CNN) is then trained on this dataset to generate a relit portrait image from a source image and a target lighting. The training procedure regularizes the generated results, removing the artifacts caused by physically-based relighting methods. A GAN loss is further applied to improve the quality of the relit portrait image. Our trained network can relight portrait images at resolutions as high as 1024 × 1024. We evaluate the proposed method qualitatively and quantitatively on the proposed DPR dataset, the Flickr portrait dataset, and the Multi-PIE dataset. Our experiments demonstrate that the proposed method achieves state-of-the-art results. Please refer to https://zhhoper.github.io/dpr.html for dataset and code.
Applications of DSS to inverse rendering are demonstrated for geometry synthesis and denoising, where large-scale topological changes as well as small-scale detail modifications are handled accurately and robustly without requiring explicit connectivity, outperforming state-of-the-art techniques.
This work shows how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image; the authors believe this is the first attempt to use MVS supervision for learning inverse rendering.
A straightforward method is demonstrated for generating dense pseudo ground truth from the model’s predictions and multi-illumination data, enabling generalization to in-the-wild imagery; the real-world applicability of the estimates is shown through otherwise difficult editing tasks such as recoloring and relighting.
RenderNet is presented, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes with high performance and can be used in inverse rendering tasks to estimate shape, pose, lighting and texture from a single image.