Garment reconstruction is the task of recovering 3D garment geometry — and, in some settings, sewing patterns or texture — from inputs such as single images, multi-view video, or point cloud sequences of dressed humans.
Deep Fashion3D is introduced, the largest collection to date of 3D garment models, with the goal of establishing a novel benchmark and dataset for the evaluation of image-based garment reconstruction systems.
This work synthesizes a versatile dataset, named SewFactory, which consists of around 1M images and ground-truth sewing patterns for model training and quantitative evaluation, and proposes a two-level Transformer network called Sewformer, which significantly improves the sewing pattern prediction performance.
SMPLicit is introduced, a novel generative model to jointly represent body pose, shape and clothing geometry that can represent in a unified manner different garment topologies while controlling other properties like the garment size or tightness/looseness.
This paper proposes a layered garment representation on top of SMPL and makes the garment's skinning weights independent of the body mesh, which significantly improves the expressive power of the garment model.
AnchorUDF represents 3D shapes by predicting unsigned distance fields (UDFs) to enable open garment surface modeling at arbitrary resolution and achieves the state-of-the-art performance on single-view garment reconstruction.
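The appeal of unsigned distance fields for garments is that they can represent open, non-watertight surfaces (a shirt has boundaries, so it has no well-defined inside/outside for a signed field). A minimal sketch of the idea, using an analytic UDF for an open unit disk in place of AnchorUDF's learned network (the function and threshold here are illustrative assumptions, not the paper's method):

```python
import numpy as np

# Illustrative sketch (not AnchorUDF itself): an analytic unsigned
# distance field for an open surface -- the unit disk in the z=0 plane.
# A learned UDF plays the same role: it returns a non-negative distance,
# so open garment surfaces can be represented, unlike a signed distance
# field, which requires a watertight inside/outside.
def disk_udf(points):
    """Unsigned distance from points (N, 3) to the open unit disk."""
    xy = points[:, :2]
    r = np.linalg.norm(xy, axis=1)
    dz = np.abs(points[:, 2])
    d = np.empty(len(points))
    inside = r <= 1.0
    # Above/below the disk: distance is just |z|.
    d[inside] = dz[inside]
    # Beyond the radius: distance to the disk rim.
    d[~inside] = np.sqrt((r[~inside] - 1.0) ** 2 + dz[~inside] ** 2)
    return d

# Surface points at arbitrary resolution: densify samples and keep those
# whose unsigned distance falls below a small threshold.
samples = np.random.uniform(-1.5, 1.5, size=(200_000, 3))
surface = samples[disk_udf(samples) < 0.01]
```

Because the field is queried pointwise, the sampling density (and hence the reconstruction resolution) can be chosen freely at inference time.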
A Transformer-based framework for 3D human texture estimation from a single image is proposed that effectively exploits the global information of the input image, overcoming the limitations of existing methods based solely on convolutional neural networks.
ClothWild is proposed, a 3D clothed human reconstruction framework that is the first to address robustness to in-the-wild images, along with a DensePose-based loss function designed to reduce the ambiguities of weak supervision.
A principled framework is presented that uses 3D point cloud sequences of dressed humans for garment reconstruction; it introduces a novel Proposal-Guided Hierarchical Feature Network and an Iterative Graph Convolution Network, which integrate high-level semantic features with low-level geometric features to reconstruct fine details.
This paper introduces a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field (SDF) of the garments, so that the open garment meshes can be extracted via garment template registration in the canonical space.
An end-to-end differentiable pipeline is proposed that represents garments as implicit surfaces and learns a skinning field conditioned on the shape and pose parameters of an articulated body model, allowing body and garment parameters to be recovered jointly from image observations, something previous work could not do.
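Several of the methods above deform garments with skinning weights that are queried as a field in space rather than copied from the body mesh. A minimal linear-blend-skinning sketch of that idea, where a hypothetical inverse-distance field stands in for the learned, pose/shape-conditioned field (the weighting scheme and bone layout here are illustrative assumptions):

```python
import numpy as np

# Sketch of field-based linear blend skinning (not any one paper's pipeline).
def skinning_weights(points, bone_centers):
    """Hypothetical skinning field: inverse-distance weights per bone.

    Returns an (N, B) array whose rows sum to 1. In the methods above,
    a learned network conditioned on body shape and pose plays this role.
    """
    d = np.linalg.norm(points[:, None, :] - bone_centers[None, :, :], axis=2)
    w = 1.0 / (d + 1e-6)
    return w / w.sum(axis=1, keepdims=True)

def skin(points, transforms, weights):
    """Deform points (N, 3) by blending rigid bone transforms (B, 4, 4)."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    per_bone = np.einsum('bij,nj->nbi', transforms, homo)               # (N, B, 4)
    blended = np.einsum('nb,nbi->ni', weights, per_bone)                # (N, 4)
    return blended[:, :3]

# Usage: garment vertices get weights by querying the field at their
# positions, with no need to transfer weights from the body mesh.
bones = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
verts = np.random.randn(100, 3)
T = np.stack([np.eye(4), np.eye(4)])  # identity pose: points are unchanged
out = skin(verts, T, skinning_weights(verts, bones))
```

Decoupling the weights from any fixed mesh connectivity is what lets a single skinning model drive garments of different topologies and resolutions.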