3260 papers • 126 benchmarks • 313 datasets
(Image credit: Papersgraph)
These leaderboards are used to track progress in Robot Task Planning
It is shown that a mild relaxation of the task and workspace constraints implicit in existing object grasping datasets can cause neural network based grasping algorithms to fail on even a simple block stacking task when executed under more realistic circumstances.
This is the first paper that reconciles visual-inertial SLAM and dense human mesh tracking and can have a profound impact on planning and decision-making, human-robot interaction, long-term autonomy, and scene prediction.
This work proposes to provide real-world grounding by means of pretrained skills, which constrain the model to propose natural language actions that are both feasible and contextually appropriate. It shows how low-level skills can be combined with large language models so that the language model supplies high-level knowledge about the procedures for performing complex, temporally extended instructions.
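The core idea above — letting a language model propose actions but constraining the choice by what the pretrained skills can actually do — can be sketched as a simple scoring rule. The combining rule (product of scores), the action names, and all numbers below are illustrative assumptions, not taken from the paper.

```python
# Sketch: combine a language model's action preferences with skill
# feasibility ("affordance") scores. All names and numbers are invented.

def select_action(llm_scores, affordances):
    """Pick the action maximizing llm_score * affordance.

    llm_scores: dict mapping action -> probability the language model
        assigns to that action given the instruction.
    affordances: dict mapping action -> estimated probability that the
        pretrained skill can succeed in the current state.
    """
    combined = {a: llm_scores[a] * affordances.get(a, 0.0) for a in llm_scores}
    return max(combined, key=combined.get), combined

# Hypothetical example: "pick up sponge" is linguistically likely but
# infeasible in the current state, so the feasible action wins.
llm = {"pick up sponge": 0.6, "go to table": 0.3, "open drawer": 0.1}
aff = {"pick up sponge": 0.05, "go to table": 0.9, "open drawer": 0.4}
best, scores = select_action(llm, aff)
print(best)  # -> go to table
```

The product form means an action must be both likely under the instruction and executable in the scene; either factor near zero rules it out.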
A neural network architecture and an associated planning algorithm are presented that learn a representation of the world capable of generating prospective futures, use this generative model to simulate the outcomes of sequences of high-level actions in a variety of environments, and evaluate those actions via a variant of Monte Carlo Tree Search to find a viable solution to a given problem.
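The planning loop described above — simulate action sequences through a generative model and score them — can be illustrated with a toy version. The one-dimensional "world model", the action names, and the use of plain random rollouts in place of full Monte Carlo Tree Search are all simplifying assumptions for the sketch.

```python
import random

# Toy stand-in for a learned generative model: a 1-D walk toward a goal.
def model(state, action):
    return state + (1 if action == "right" else -1)

def reward(state, goal=3):
    return 1.0 if state == goal else 0.0

def rollout_value(state, first_action, depth=5, n_rollouts=50, seed=0):
    """Average return of taking first_action, then acting randomly.

    This is the rollout-evaluation core of MCTS, without the tree.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        s = model(state, first_action)
        best = reward(s)
        for _ in range(depth - 1):
            s = model(s, rng.choice(["left", "right"]))
            best = max(best, reward(s))
        total += best
    return total / n_rollouts

def plan(state, actions=("left", "right")):
    """Choose the first action whose simulated futures score highest."""
    return max(actions, key=lambda a: rollout_value(state, a))

print(plan(0))  # -> right  (moving toward the goal scores higher)
```

The point of the sketch is the interface: the planner never touches the real environment, only the `model` function, which in the paper's setting would be the learned generative model.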
This work proposes a novel, category-level manipulation framework that leverages an object-centric, category-level representation and model-free 6-DoF motion tracking, making it possible to teach different manipulation strategies from a single demonstration, without complicated manual programming.
PackIt is presented, a virtual environment to evaluate and potentially learn the ability to do geometric planning, where an agent needs to take a sequence of actions to pack a set of objects into a box with limited space.
In reality, there is still much to be done for robots to be able to perform manipulation actions with full autonomy. Complicated manipulation tasks, such as cooking, may still require a person to perform some actions that are very risky for a robot to perform. On the other hand, some other actions may be very risky for a human with physical disabilities to perform. Therefore, it is necessary to balance the workload of a robot and a human based on their limitations while minimizing the effort needed from the human in a collaborative robot (cobot) set-up. This paper proposes a new version of our functional object-oriented network (FOON) that integrates weights in its functional units to reflect a robot's chance of successfully executing the action of that functional unit. The paper also presents a task planning algorithm for the weighted FOON that allocates the manipulation action load between the robot and the human to achieve optimal performance while minimizing human effort. Through a number of experiments, this paper shows several successful cases in which the proposed weighted FOON and task planning algorithm allow a robot and a human to complete complicated tasks together with higher success rates than a robot doing them alone.
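The allocation idea above can be sketched in a few lines: each action carries an estimated robot success probability (the weight on its functional unit), and actions too risky for the robot fall to the human. The threshold rule, the action list, and all probabilities are invented for illustration and are not the paper's actual algorithm.

```python
# Sketch: assign each action to the robot when its estimated success
# probability clears a threshold, otherwise to the human. Threshold,
# actions, and probabilities are hypothetical.

def allocate(actions, robot_threshold=0.7):
    """Assign each (name, robot_success_prob) action to 'robot' or 'human'.

    Keeps human effort low by giving the human only the actions the
    robot is unlikely to complete successfully.
    """
    plan = []
    for name, p_success in actions:
        agent = "robot" if p_success >= robot_threshold else "human"
        plan.append((name, agent))
    return plan

# Hypothetical cooking subtask with invented success probabilities.
actions = [("fetch pot", 0.95), ("pour oil", 0.8), ("slice onion", 0.4)]
print(allocate(actions))
# -> [('fetch pot', 'robot'), ('pour oil', 'robot'), ('slice onion', 'human')]
```

A real planner would reason over the whole FOON graph rather than action by action, but the trade-off is the same: robot success probability versus human effort.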
The Attention-driven Robotic Manipulation (ARM) algorithm is presented, which is a general manipulation algorithm that can be applied to a range of sparse-rewarded tasks, given only a small number of demonstrations.
A novel, object-centric canonical representation at the category level is proposed, which allows establishing dense correspondence across object instances and transferring task-relevant grasps to novel instances.
This paper investigates the possibility of grounding high-level tasks, expressed in natural language, to a chosen set of actionable steps and proposes a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions.
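The translation step above — mapping a free-form generated plan step to the nearest admissible action — can be sketched with a similarity search. A real system would use a sentence-embedding model; the toy bag-of-words cosine similarity and the action set below are stand-in assumptions.

```python
from collections import Counter
import math

# Sketch: ground a generated plan step to the most similar admissible
# action. Bag-of-words cosine similarity stands in for a learned
# sentence embedding; the action names are invented.

def cosine(a, b):
    """Cosine similarity between two strings as word-count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground(step, admissible_actions):
    """Return the admissible action most similar to the generated step."""
    return max(admissible_actions, key=lambda a: cosine(step, a))

actions = ["walk to kitchen", "open fridge", "grab milk", "close fridge"]
print(ground("go to the kitchen", actions))  # -> walk to kitchen
```

The key property is that the planner's output is always an executable action from the admissible set, even when the language model phrases the step differently.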