3260 papers • 126 benchmarks • 313 datasets
(Image credit: Papersgraph)
These leaderboards are used to track progress in Single-View 3D Reconstruction.
Use these libraries to find Single-View 3D Reconstruction models and implementations.
By replacing conventional decoders with an implicit decoder for representation learning and shape generation, this work demonstrates superior results on tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
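The core idea of an implicit decoder can be sketched concretely: rather than emitting a voxel grid or mesh directly, a small network maps a shape latent code plus a 3D query point to an inside/outside occupancy value, and the shape is recovered by querying many points. The sketch below uses random placeholder weights and illustrative layer sizes — it is a minimal toy, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def implicit_decoder(z, points, W1, b1, W2, b2):
    """z: (d,) shape latent; points: (n, 3) query coords -> (n,) occupancy in (0, 1)."""
    # Condition every query point on the same latent code by concatenation.
    x = np.concatenate([np.broadcast_to(z, (points.shape[0], z.shape[0])), points], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> soft occupancy

# Illustrative sizes and random weights (stand-ins for a trained decoder).
d, hdim = 16, 32
W1 = rng.standard_normal((d + 3, hdim)) * 0.1
b1 = np.zeros(hdim)
W2 = rng.standard_normal(hdim) * 0.1
b2 = 0.0

z = rng.standard_normal(d)                        # latent code for one shape
# Query a dense grid of points; a surface could then be extracted (e.g. marching cubes).
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 8)] * 3), axis=-1).reshape(-1, 3)
occ = implicit_decoder(z, grid, W1, b1, W2, b2)
inside = occ > 0.5                                # threshold for a hard shape
```

Because the decoder is a continuous function of the query coordinates, the same latent code can be evaluated at any resolution, which is what enables the smooth interpolation and high visual quality the summary refers to.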
PQ-NET is introduced, a deep neural network that represents and generates 3D shapes via sequential part assembly: it encodes a sequence of part features into a fixed-size latent vector and reconstructs the 3D shape one part at a time, yielding a sequential assembly.
DISN, a Deep Implicit Surface Network that generates a high-quality, detail-rich 3D mesh from a 2D image by predicting the underlying signed distance field from combined global and local features, achieves state-of-the-art single-view reconstruction performance.
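The "combining global and local features" step can be sketched as follows: for each 3D query point, a global image feature is concatenated with a local feature sampled from a CNN feature map at the point's 2D projection, and a regressor predicts the signed distance. Everything below (toy orthographic camera, random feature maps, a linear regressor in place of DISN's MLP) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

H = W = 8                                          # toy feature-map resolution
C_local, C_global = 4, 6
feat_map = rng.standard_normal((H, W, C_local))    # stand-in local CNN feature map
global_feat = rng.standard_normal(C_global)        # stand-in pooled global feature

def project(p):
    """Toy orthographic projection of p in [-1, 1]^3 to pixel coordinates."""
    u = (p[0] + 1) / 2 * (W - 1)
    v = (p[1] + 1) / 2 * (H - 1)
    return u, v

def sample_local(u, v):
    """Nearest-neighbour sampling (bilinear in the actual method)."""
    return feat_map[int(round(v)), int(round(u))]

# Linear regressor standing in for the SDF-prediction MLP.
Wr = rng.standard_normal(C_local + C_global + 3) * 0.1

def predict_sdf(p):
    u, v = project(p)
    x = np.concatenate([sample_local(u, v), global_feat, p])
    return float(x @ Wr)                           # predicted signed distance

sdf = predict_sdf(np.array([0.2, -0.3, 0.5]))
```

The local feature is what lets the prediction recover fine image-aligned detail that a single global feature vector would wash out.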
This work presents a geometry-based end-to-end deep learning framework that first detects the mirror plane of reflection symmetry commonly found in man-made objects, and then predicts depth maps by finding the intra-image pixel-wise correspondences induced by the symmetry.
This work proposes a truly differentiable rendering framework that directly renders a colorized mesh using differentiable functions and back-propagates efficient supervision signals to mesh vertices and their attributes from various forms of image representations, including silhouette, shading, and color images.
3D-LMNet, a latent embedding matching approach for 3D reconstruction, is proposed; it outperforms state-of-the-art approaches on single-view 3D reconstruction on both real and synthetic datasets while generating multiple plausible reconstructions, demonstrating the generalizability and utility of the approach.
A unified framework is presented that can combine both types of supervision: a small amount of camera pose annotations is used to enforce pose-invariance and view-point consistency, and unlabeled images combined with an adversarial loss are used to enforce the realism of rendered, generated models.
This paper imposes domain confusion between natural and synthetic image representations to reduce the distribution gap, and forces the reconstruction to be 'realistic' by constraining it to lie on a (learned) manifold of realistic object shapes.
A differentiable rendering framework is presented that allows gradients to be analytically computed for all pixels in an image, viewing foreground rasterization as a weighted interpolation of local properties and background rasterization as a distance-based aggregation of global geometry.
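The "distance-based aggregation" idea behind this kind of soft rasterization can be sketched for a silhouette: each primitive contributes a soft occupancy via a sigmoid of its signed 2D distance to a pixel, and contributions are fused with a differentiable product. To keep the sketch short, each triangle is approximated here by a disc around its centroid — that simplification, and all names below, are assumptions for illustration.

```python
import numpy as np

def soft_silhouette(pixels, tri_centroids, tri_radius=0.3, sigma=0.02):
    """pixels: (P, 2); tri_centroids: (T, 2) -> (P,) soft coverage in (0, 1)."""
    # Signed "inside-ness": positive inside the disc approximating each triangle.
    d = tri_radius - np.linalg.norm(pixels[:, None, :] - tri_centroids[None, :, :], axis=-1)
    prob = 1.0 / (1.0 + np.exp(-d / sigma))        # per-primitive soft occupancy
    # Differentiable aggregation: pixel is covered unless missed by every primitive.
    return 1.0 - np.prod(1.0 - prob, axis=1)

# A 16x16 pixel grid over the unit square and two toy primitives.
ys, xs = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16), indexing="ij")
pixels = np.stack([xs.ravel(), ys.ravel()], axis=1)
tris = np.array([[0.3, 0.3], [0.7, 0.6]])
sil = soft_silhouette(pixels, tris)                # smooth w.r.t. primitive positions
```

Because the coverage varies smoothly with the primitives' positions (no hard in/out test), gradients flow from every pixel back to the geometry, which is what makes silhouette-based supervision possible.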
This work proposes novel hyperparameter-free losses for single-view 3D reconstruction with morphable models (3DMMs), along with a novel implicit regularization technique based on random virtual projections that requires no additional 2D or 3D annotations.