3260 papers • 126 benchmarks • 313 datasets
This work formulates a multi-level architecture that is end-to-end trainable and significantly outperforms existing state-of-the-art techniques on single-image human shape reconstruction by fully leveraging 1k-resolution input images.
A model-free 3D human mesh estimation framework, named DecoMR, is proposed that explicitly establishes dense correspondence between the mesh and local image features in UV space (i.e., the IUV image), together with a novel UV map that preserves most of the neighboring relations of the original mesh surface.
This work presents the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones and shows that it faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
A Pyramidal Mesh Alignment Feedback (PyMAF) loop is proposed that leverages a feature pyramid and explicitly rectifies the predicted parameters in the deep regressor based on the mesh-image alignment status.
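The core idea of such a feedback loop can be illustrated with a toy sketch (this is not PyMAF's actual architecture; `feedback_fn` and `update_fn` are hypothetical stand-ins for the mesh-aligned feature extraction and the parameter regressor):

```python
# Toy sketch of an iterative alignment-feedback loop: parameters are
# refined step by step, each step using a feedback signal computed from
# the current estimate (in PyMAF, mesh-aligned image features).

def refine(params, feedback_fn, update_fn, steps=3):
    """Run `steps` rounds of feedback-driven parameter refinement.

    feedback_fn: computes an alignment signal from the current params.
    update_fn:   applies a correction predicted from that signal.
    """
    for _ in range(steps):
        feedback = feedback_fn(params)
        params = update_fn(params, feedback)
    return params
```

For instance, with `feedback_fn = lambda p: 1.0 - p` (a residual toward a target of 1.0) and `update_fn = lambda p, f: p + 0.5 * f`, each iteration halves the remaining error, mimicking how each feedback round tightens the mesh-image alignment.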
The proposed ICON ("Implicit Clothed humans Obtained from Normals") enables avatar creation directly from video with personalized pose-dependent cloth deformation and outperforms the state of the art in reconstruction.
SmoothNet models the natural smoothness of body movements by learning long-range temporal relations of every joint, without considering the noisy correlations among joints; it significantly improves the temporal smoothness of existing pose estimators and, as a side effect, enhances estimation accuracy on challenging frames.
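The per-joint temporal filtering idea can be sketched minimally as follows (a plain moving average, not SmoothNet's learned network; the key property shared with the summary above is that each joint's trajectory is processed independently over time):

```python
# Minimal sketch of per-joint temporal smoothing: each joint trajectory
# is filtered independently across frames, ignoring cross-joint
# correlations. NOTE: a simple moving average stands in for the learned
# temporal model.

def smooth_joint(trajectory, window=3):
    """Moving average over one joint's scalar coordinate sequence."""
    half = window // 2
    out = []
    for t in range(len(trajectory)):
        lo, hi = max(0, t - half), min(len(trajectory), t + half + 1)
        out.append(sum(trajectory[lo:hi]) / (hi - lo))
    return out

def smooth_pose_sequence(poses, window=3):
    """poses: list of frames, each a list of per-joint coordinates.
    Each joint index is smoothed independently across frames."""
    n_joints = len(poses[0])
    per_joint = [[frame[j] for frame in poses] for j in range(n_joints)]
    smoothed = [smooth_joint(traj, window) for traj in per_joint]
    return [[smoothed[j][t] for j in range(n_joints)]
            for t in range(len(poses))]
```

Applied to a jittery single-joint sequence like `[[0.0], [1.0], [0.0], [1.0], [0.0]]`, the filter damps the frame-to-frame oscillation while keeping each joint's trajectory independent of the others.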
This work presents a novel human body model parameterized by an extensive set of anthropometric measurements, capable of generating a wide range of human body shapes and poses; it is the first of its kind to have been trained end-to-end using only synthetically generated data.
This work presents a novel joint 3D human-object reconstruction method (CONTHO) that exploits contact information between humans and objects, and proposes a contact-based refinement Transformer that aggregates human and object features based on the estimated human-object contact.
DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image, leverages a dense semantic representation generated from the SMPL model as an additional input to reduce the ambiguities associated with reconstructing invisible areas.
This work proposes a learning-based motion capture model that optimizes neural network weights to predict 3D shape and skeleton configurations from a monocular RGB video, and shows that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.