We study the problem of robotic stacking with objects of complex geometry. We propose a challenging and diverse set of such objects, carefully designed to require strategies beyond a simple "pick-and-place" solution. Our method is a reinforcement learning (RL) approach combined with vision-based interactive policy distillation and simulation-to-reality transfer. Our learned policies efficiently handle multiple object combinations in the real world and exhibit a large variety of stacking skills. In a large experimental study, we investigate what choices matter for learning such general vision-based agents in simulation, and what affects optimal transfer to the real robot. We then leverage data collected by such policies and improve upon them with offline RL. A video and a blog post of our work are provided as supplementary material.
A. Raju, Rae Jeong, Nimrod Gileadi, Alex X. Lee, Coline Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, D. Khosid, C. Fantacci, José Enrique Chen, M. Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, F. Nori