Predict contact between object and hand (human or robot).
These leaderboards are used to track progress in Grasp Contact Prediction
Use these libraries to find Grasp Contact Prediction models and implementations
No subtasks available.
This work presents ContactDB, a novel dataset of contact maps for household objects that captures the rich hand-object contact that occurs during grasping, enabled by use of a thermal camera.
This work introduces ContactPose, the first dataset of hand-object contact paired with hand pose, object pose, and RGB-D images, and uses this data to rigorously evaluate various data representations, heuristics from the literature, and learning methods for contact modeling.
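A simple baseline among the heuristics such work evaluates is proximity-thresholded contact: an object point is labeled as touched when it lies close enough to the hand surface. The sketch below illustrates this idea with NumPy and SciPy; the 4 mm threshold and the function name contact_from_proximity are illustrative assumptions, not details taken from ContactPose.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_from_proximity(object_points, hand_vertices, threshold=0.004):
    """Label object points as 'in contact' when they lie within
    `threshold` meters of the hand surface (4 mm is an assumed,
    typical choice, not a value from the ContactPose paper)."""
    tree = cKDTree(hand_vertices)         # index the hand surface
    dists, _ = tree.query(object_points)  # nearest hand vertex per point
    return dists < threshold              # boolean per-point contact map

# Toy usage with random geometry standing in for real meshes.
obj = np.random.rand(2048, 3)
hand = np.random.rand(778, 3)  # 778 = MANO hand vertex count
contact = contact_from_proximity(obj, hand)
print(f"{contact.mean():.1%} of object points in contact")
```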
This work collects a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size, and trains GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes.
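GrabNet is described as a conditional generative network; a common way to realize that idea is a conditional VAE that encodes a grasp given an object feature vector and, at test time, decodes grasps for unseen objects from prior samples. The PyTorch sketch below shows such a CVAE under assumed dimensions (a 61-D grasp vector and a 1024-D object feature); it illustrates the general technique, not GrabNet's actual architecture.

```python
import torch
import torch.nn as nn

class GraspCVAE(nn.Module):
    """Minimal conditional VAE: encode a grasp (e.g., flattened hand
    pose + translation) conditioned on an object feature vector, and
    decode a grasp from a latent sample plus the same object feature.
    All dimensions are illustrative assumptions."""
    def __init__(self, grasp_dim=61, obj_dim=1024, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(grasp_dim + obj_dim, 512), nn.ReLU(),
            nn.Linear(512, 2 * latent_dim))  # outputs (mu, logvar)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + obj_dim, 512), nn.ReLU(),
            nn.Linear(512, grasp_dim))
        self.latent_dim = latent_dim

    def forward(self, grasp, obj_feat):
        mu, logvar = self.enc(torch.cat([grasp, obj_feat], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, obj_feat], -1)), mu, logvar

    @torch.no_grad()
    def sample(self, obj_feat):
        """Predict a grasp for an unseen object: sample z ~ N(0, I)."""
        z = torch.randn(obj_feat.shape[0], self.latent_dim)
        return self.dec(torch.cat([z, obj_feat], -1))

# Toy usage: a random vector stands in for a learned object encoding.
model = GraspCVAE()
grasp = model.sample(torch.randn(1, 1024))
print(grasp.shape)  # torch.Size([1, 61])
```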
A novel, object-centric canonical representation at the category level is proposed, which allows establishing dense correspondence across object instances and transferring task-relevant grasps to novel instances.
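Once every instance of a category is mapped into a shared canonical space, transferring a grasp reduces to matching canonical coordinates across instances. The sketch below implements that matching step with a nearest-neighbor lookup; transfer_contacts and the random canonical coordinates are hypothetical stand-ins for the output of a learned correspondence model.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_contacts(src_canon, tgt_canon, src_contact_idx):
    """Map contact points from a source instance to a target instance
    through a shared canonical space: each source contact point is
    matched to the target point with the nearest canonical coordinate."""
    tree = cKDTree(tgt_canon)
    _, tgt_idx = tree.query(src_canon[src_contact_idx])
    return tgt_idx  # indices of corresponding contact points on target

# Toy usage: in practice these canonical coordinates would come from
# the learned object-centric representation, not random numbers.
src_canon = np.random.rand(2048, 3)
tgt_canon = np.random.rand(2048, 3)
contact_idx = np.where(np.random.rand(2048) < 0.05)[0]
print(transfer_contacts(src_canon, tgt_canon, contact_idx)[:10])
```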
Adding a benchmark result helps the community track progress.