(Image: Zimmerman et al. Image credit: Papersgraph)
These leaderboards are used to track progress in 3D hand pose estimation.
Use these libraries to find 3D hand pose estimation models and implementations.
This work introduces an adversary trained to tell whether human body shape and pose parameters are real or not using a large database of 3D human meshes, and produces a richer and more useful mesh representation that is parameterized by shape and 3D joint angles.
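The adversarial prior described above can be sketched numerically. This is a minimal illustration, not the paper's actual implementation: the discriminator here is a hypothetical single linear layer with a sigmoid, and the generator-side loss simply pushes predicted shape/pose parameters toward the region the discriminator scores as "real".

```python
import numpy as np

def discriminator(theta, w, b):
    # Probability that the shape/pose parameter vector(s) look "real".
    # Hypothetical single linear layer + sigmoid; a real adversary
    # would be a deeper network trained on a mesh database.
    return 1.0 / (1.0 + np.exp(-(theta @ w + b)))

def adversarial_prior_loss(pred_theta, w, b):
    # Generator-side adversarial loss: penalize predictions the
    # discriminator classifies as fake (p_real near 0).
    p_real = discriminator(pred_theta, w, b)
    return -np.mean(np.log(p_real + 1e-8))
```

In training, this term would be added to the usual reprojection/keypoint losses so that implausible joint-angle configurations are discouraged even when they fit the image evidence.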
A deep network is proposed that learns a network-implicit 3D articulation prior that yields good estimates of the 3D pose from regular RGB images, and a large-scale 3D hand pose dataset based on synthetic hand models is introduced.
This model is designed as a 3D CNN that provides accurate estimates while running in real time, outperforms previous methods on almost all publicly available 3D hand and human pose estimation datasets, and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge.
With simple improvements (ResNet layers, data augmentation, and better initial hand localization), DeepPrior achieves performance better than or comparable to more sophisticated recent methods on the three main benchmarks (NYU, ICVL, MSRA) while keeping the simplicity of the original method.
This work presents an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations, and improves grasp quality metrics over baselines, using RGB images as input.
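A contact loss of this flavor can be sketched as an attraction term between designated hand vertices and the object surface. This is an illustrative simplification under assumed inputs (a hand vertex array, sampled object points, and a hypothetical `contact_idx` list of fingertip vertices), not the paper's exact formulation, which also includes a repulsion/penetration term.

```python
import numpy as np

def contact_attraction_loss(hand_verts, obj_pts, contact_idx, tol=0.005):
    # hand_verts: (V, 3) hand mesh vertices; obj_pts: (P, 3) points
    # sampled on the object surface; contact_idx: indices of vertices
    # expected to touch the object (hypothetical choice, e.g. fingertips).
    # Distance from each contact vertex to its nearest object point:
    d = np.linalg.norm(
        hand_verts[contact_idx][:, None, :] - obj_pts[None, :, :], axis=-1
    )
    nearest = d.min(axis=1)
    # Penalize only vertices farther than a small contact tolerance,
    # pulling the grasp into physically plausible contact.
    return np.maximum(nearest - tol, 0.0).mean()
```

Because the term is differentiable in the hand vertices, it can be minimized jointly with the image-based reconstruction losses during end-to-end training.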
This work presents the first end-to-end deep learning based method that predicts both 3D hand shape and pose from RGB images in the wild, consisting of the concatenation of a deep convolutional encoder, and a fixed model-based decoder.
Qualitative experiments show that the HAMR framework is capable of recovering an appealing 3D hand mesh even in the presence of severe occlusions, and it outperforms the state-of-the-art methods for both 2D and 3D hand pose estimation from a monocular RGB image on several benchmark datasets.
This work proposes a Graph Convolutional Neural Network (Graph CNN) based method to reconstruct a full 3D mesh of the hand surface that contains richer information about both 3D hand shape and pose, and proposes a weakly-supervised approach by leveraging the depth map as weak supervision in training.
This work introduces a simple and effective network architecture for monocular 3D hand pose estimation consisting of an image encoder followed by a mesh convolutional decoder that is trained through a direct 3D hand mesh reconstruction loss.
This paper addresses the problem of 3D human pose and shape estimation from a single image by proposing a graph-based mesh regression, which outperforms comparable baselines relying on model parameter regression and achieves state-of-the-art results among model-based pose estimation approaches.