Deformable Object Manipulation
This work considers the task of bed-making, in which a robot sequentially grasps and pulls at pick points to increase blanket coverage, and shows that transfer-invariant pick points on fabric can be learned effectively.
This paper proposes an iterative pick-place action space that encodes the conditional relationship between picking and placing on deformable objects. On a suite of deformable-object manipulation tasks with visual RGB observations, it learns an order of magnitude faster than independent action spaces.
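The conditional pick-place idea can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the pick location is chosen first, and the place location is then scored conditioned on that pick, rather than choosing both independently. The scoring functions here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8  # small image grid for the sketch

def pick_scores(obs):
    # hypothetical per-pixel pick value map
    return obs.mean(axis=-1)

def place_scores(obs, pick_xy):
    # hypothetical place value map conditioned on the chosen pick:
    # here, placing farther from the pick scores higher
    ys, xs = np.mgrid[0:H, 0:W]
    return np.hypot(ys - pick_xy[0], xs - pick_xy[1])

obs = rng.random((H, W, 3))
pick_flat = int(np.argmax(pick_scores(obs)))
pick_xy = (pick_flat // W, pick_flat % W)
place_flat = int(np.argmax(place_scores(obs, pick_xy)))
place_xy = (place_flat // W, place_flat % W)
```

The key point is that `place_scores` receives `pick_xy` as an input, so the place decision is conditioned on the pick decision.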
This paper presents SoftGym, a set of open-source simulated benchmarks for manipulating deformable objects, with a standard OpenAI Gym API and a Python interface for creating new environments, enabling reproducible research in this area.
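Because SoftGym follows the standard Gym `reset`/`step` interface, a control loop looks like any Gym program. Since SoftGym itself is not assumed installed here, a stub environment with the same interface stands in for a real cloth environment; everything about the stub is hypothetical.

```python
import numpy as np

class StubClothEnv:
    """Minimal Gym-style stand-in for a SoftGym environment."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(4)  # placeholder observation

    def step(self, action):
        self.t += 1
        obs = np.full(4, self.t, dtype=float)
        reward = -float(np.linalg.norm(action))  # placeholder reward
        done = self.t >= self.horizon
        return obs, reward, done, {}

env = StubClothEnv()
obs = env.reset()
total, done = 0.0, False
while not done:
    action = np.zeros(2)  # trivial policy for the sketch
    obs, reward, done, info = env.step(action)
    total += reward
```

Swapping the stub for a real SoftGym environment changes only the construction line, which is the point of a standardized API.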
This work uses a combination of state-of-the-art deep reinforcement learning algorithms to manipulate deformable objects (specifically cloth), and evaluates the approach on three tasks: folding a towel up to a mark, folding a face towel diagonally, and draping a piece of cloth over a hanger.
This work proposes to simply learn the Policy in the Latent Action Space (PLAS), so that the offline-RL requirement of staying within the support of the dataset actions is naturally satisfied. The method performs competitively across various continuous control tasks and dataset types, outperforming existing offline reinforcement learning methods that rely on explicit constraints.
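The latent-action idea can be illustrated with a toy sketch: the policy outputs a bounded latent variable, and a decoder fitted to the offline dataset maps it back to an action, so every executed action stays near the data support. In PLAS the decoder is a learned conditional VAE; here it is a fixed affine stand-in, and the policy head is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in offline dataset of 2-D actions
dataset_actions = rng.uniform(-0.5, 0.5, size=(100, 2))

# toy "decoder": map a bounded latent into the data's mean +/- std band
mu = dataset_actions.mean(axis=0)
sigma = dataset_actions.std(axis=0)

def decode(z):
    z = np.clip(z, -1.0, 1.0)  # bounded latent keeps actions in-support
    return mu + sigma * z

def latent_policy(obs):
    # hypothetical deterministic policy head producing a latent action
    return np.tanh(obs[:2])

obs = np.array([0.3, -0.8, 0.1])
action = decode(latent_policy(obs))
```

Because the latent is clipped before decoding, the decoded action cannot leave the band the decoder was fitted on, which is the implicit constraint the summary refers to.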
This work proposes a new learning framework that jointly optimizes the visual representation model and the dynamics model using contrastive estimation. The resulting visual manipulation policies, trained on data collected purely in simulation, transfer to a real PR2 robot through domain randomization.
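A minimal sketch of contrastive (InfoNCE-style) estimation, the objective family the summary names: embeddings of matching pairs are scored against the other pairs in a batch. The encoder here is a fixed random projection, not the paper's learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)
B, D, E = 4, 6, 3                       # batch, input dim, embed dim
enc = rng.standard_normal((D, E))       # stand-in encoder (random projection)
obs = rng.standard_normal((B, D))
pos = obs + 0.01 * rng.standard_normal((B, D))  # matching views

z, zp = obs @ enc, pos @ enc
logits = z @ zp.T                       # similarity of every (i, j) pair
logits -= logits.max(axis=1, keepdims=True)      # numerical stability
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -np.log(np.diag(p)).mean()       # diagonal entries are the positives
```

Minimizing this loss pushes matching pairs together and non-matching pairs apart; in the paper the same signal shapes both the representation and the dynamics model.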
This paper proposes to represent a non-rigid transformation as a point-wise combination of several rigid transformations. This keeps the solution space well constrained and lets the method be solved iteratively with a recurrent framework, greatly reducing the difficulty of learning.
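The representation can be sketched directly: each point is warped by a convex blend of K rigid transforms (R_k, t_k) with per-point weights. This shows only the general blended-rigid-transform parameterization the summary describes, not the paper's learned recurrent solver; all values below are made up.

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# two hypothetical rigid transforms: identity, and a 90-degree
# rotation followed by a unit translation in x
K = 2
Rs = [rot2d(0.0), rot2d(np.pi / 2)]
ts = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]

def warp(points, weights):
    # weights: (N, K), rows sum to 1; each point is a convex blend of
    # the K rigidly transformed copies of itself
    out = np.zeros_like(points)
    for k in range(K):
        out += weights[:, k:k + 1] * (points @ Rs[k].T + ts[k])
    return out

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.array([[1.0, 0.0],   # first point follows transform 0
              [0.0, 1.0]])  # second point follows transform 1
warped = warp(pts, w)
```

With smoothly varying weights, the same few rigid transforms yield a smooth non-rigid deformation, which is why the solution space stays well constrained.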
This work proposes to learn a particle-based dynamics model from a partial point cloud observation to overcome the challenges of partial observability. In simulation, the method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods.
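The particle-dynamics structure can be sketched as follows: particles from a point cloud are connected to neighbors within a radius, and each particle's next state is predicted from aggregated neighbor information. The real model learns the message and update functions; here they are fixed hand-written stand-ins (a pull toward the neighborhood mean).

```python
import numpy as np

def step(positions, radius=1.5, alpha=0.1):
    """One synchronous dynamics step over a radius-neighborhood graph."""
    n = len(positions)
    new = positions.copy()
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        nbrs = (d > 0) & (d < radius)   # neighbors within the radius
        if nbrs.any():
            # stand-in message: pull toward the neighborhood mean
            new[i] += alpha * (positions[nbrs].mean(axis=0) - positions[i])
    return new

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
nxt = step(pts)
```

Because the model operates on whatever particles are observed, the same step function applies to a partial cloud, which is what makes the representation robust to partial observability.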
This letter defines a canonical stir-fry movement and proposes a decoupled framework for learning this deformable-object manipulation from human demonstration. It develops a graph- and Transformer-based model, Structured-Transformer, to capture the spatio-temporal relationship between dual-arm movements.