Affordance recognition from Human-Object Interaction
These leaderboards are used to track progress in Affordance Recognition.
Use these libraries to find Affordance Recognition models and implementations.
A deep Visual Compositional Learning (VCL) framework is devised: a simple yet efficient approach to human-object interaction (HOI) detection that largely alleviates the long-tail distribution problem and benefits low-shot and zero-shot HOI detection.
An affordance transfer learning approach is introduced to jointly detect HOIs with novel objects and recognize affordances; it can infer the affordances of novel objects from known affordance representations.
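The transfer idea can be illustrated with a toy nearest-neighbour sketch: a novel object's embedding is compared against known objects, and the affordance sets of the most similar ones are carried over. The embeddings, object names, and the top-k similarity rule below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Toy bank of known objects: embedding vector + annotated affordance set.
# Values are random stand-ins for learned representations (assumption).
rng = np.random.default_rng(0)
known_objects = {
    "cup":   (rng.normal(size=8), {"hold", "drink_with"}),
    "knife": (rng.normal(size=8), {"hold", "cut_with"}),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def transfer_affordances(novel_emb, bank, k=1):
    """Union the affordance sets of the k most similar known objects."""
    ranked = sorted(bank.values(),
                    key=lambda v: cosine(novel_emb, v[0]),
                    reverse=True)
    affs = set()
    for emb, a in ranked[:k]:
        affs |= a
    return affs

# A novel object whose embedding lies close to "cup" inherits its affordances.
novel_emb = known_objects["cup"][0] * 0.5
print(transfer_affordances(novel_emb, known_objects, k=1))
```

In practice the known representations would come from a trained HOI detector rather than random vectors, but the transfer step has the same shape: similarity in representation space drives affordance assignment.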
A novel and challenging task for comprehensive HOI understanding, termed HOI Concept Discovery, is introduced, together with a self-compositional learning framework (SCL) that enables learning on both known and unknown HOI concepts.
A novel HOI compositional learning framework, termed Fabricated Compositional Learning (FCL), is devised to address open long-tailed HOI detection: it introduces an object fabricator that generates effective object representations, then combines verbs with fabricated objects to compose new HOI samples.
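The fabricate-then-compose step can be sketched in a few lines: a generator maps noise (conditioned on a verb feature) to a fake object representation, and an HOI sample is formed by pairing the verb feature with the fabricated object. The untrained linear fabricator, feature sizes, and concatenation below are illustrative assumptions, not FCL's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16                                   # feature dimension (assumption)
W = rng.normal(size=(D, D))              # fabricator weights, untrained here
verb_feats = {"ride": rng.normal(size=D), "eat": rng.normal(size=D)}

def fabricate_object(verb_feat):
    """Generate a fake object representation from noise + verb conditioning."""
    noise = rng.normal(size=D)
    return np.tanh(W @ (noise + verb_feat))

def compose_hoi(verb, obj_feat):
    """Compose an HOI sample as concatenated verb and object features."""
    return np.concatenate([verb_feats[verb], obj_feat])

# Compose a new <ride, fabricated-object> sample, e.g. to augment
# training data for rare or unseen verb-object pairs.
sample = compose_hoi("ride", fabricate_object(verb_feats["ride"]))
```

In FCL itself the fabricator is trained jointly with the detector so that composed samples are useful for the tail of the HOI distribution; the sketch only shows the data flow.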
An efficient annotation scheme is proposed to address affordance issues in existing datasets: it combines goal-irrelevant motor actions and grasp types as affordance labels, and introduces the concept of mechanical actions to represent the action possibilities between two objects.