[1] Learning Sensorimotor Primitives of Sequential Manipulation Tasks from Visual Demonstrations
[2] Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation
[3] CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation
[4] BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models
[5] Vision-driven Compliant Manipulation for Reliable, High-Precision Assembly Tasks
[6] Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
[7] Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstration
[8] Learning Multi-Object Dense Descriptor for Autonomous Goal-Conditioned Grasping
[9] Benchmarking Off-The-Shelf Solutions to Robotic Assembly Tasks
[10] Robotic Pick-and-Place With Uncertain Object Instance Segmentation and Shape Completion
[11] kPAM 2.0: Feedback Control for Category-Level Robotic Manipulation
[12] Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos
[13] Intermittent Visual Servoing: Efficiently Learning Policies Robust to Instrument Changes for High-precision Surgical Manipulation
[14] S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency
[15] Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning
[16] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
[17] Learning to Generalize Across Long-Horizon Tasks from Human Demonstrations
[18] SQUIRL: Robust and Efficient Learning from Video Demonstration of Long-Horizon Robotic Manipulation Tasks
[19] PyTorch: An Imperative Style, High-Performance Deep Learning Library
[20] RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
[21] Visual Geometric Skill Inference by Watching Human Demonstration
[22] Multi-step Pick-and-Place Tasks Using Object-centric Dense Correspondences
[23] KETO: Learning Keypoint Representations for Tool Manipulation
[24] 6-PACK: Category-level 6D Pose Tracker with Anchor-Based Keypoints
[25] Scene-level Pose Estimation for Multiple Instances of Densely Packed Objects
[26] Self-Supervised Correspondence in Visuomotor Policy Learning
[27] Graph-Structured Visual Imitation
[28] PoseRBPF: A Rao–Blackwellized Particle Filter for 6-D Object Pose Tracking
[29] kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation
[30] Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation
[31] Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control
[32] Learning dexterous in-hand manipulation
[33] Neural Task Graphs: Generalizing to Unseen Tasks From a Single Video Demonstration
[34] Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation
[35] SE3-Pose-Nets: Structured Deep Dynamics Models for Visuomotor Control
[36] Synthetically Trained Neural Networks for Learning Human-Readable Plans from Real-World Demonstrations
[37] Robot learning of industrial assembly task via human demonstrations
[38] An Algorithmic Perspective on Imitation Learning
[39] One-Shot Visual Imitation Learning via Meta-Learning
[40] Grasp Pose Detection in Point Clouds
[41] Learning a visuomotor controller for real world robotic grasping using simulated depth images
[42] Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
[43] One-Shot Imitation Learning
[45] Domain randomization for transferring deep neural networks from simulation to the real world
[46] Real-Time Perception Meets Reactive Motion Generation
[47] Combining self-supervised learning and imitation for vision-based rope manipulation
[48] Third-Person Imitation Learning
[49] PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
[50] DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
[51] Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection
[52] End-to-End Training of Deep Visuomotor Policies
[53] Probabilistic model-based imitation learning
[54] Sampling-based algorithms for optimal motion planning
[55] Statistical Learning by Imitation of Competing Constraints in Joint Space and Task Space
[56] Dynamic Imitation in a Humanoid Robot through Nonparametric Probabilistic Inference
[57] Learning Movement Primitives
[58] A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise
[59] Shape Modeling with Front Propagation: A Level Set Approach
[60] Least-Squares Estimation of Transformation Parameters Between Two Point Patterns
[62] Intermittent Visual Servoing: Efficiently Learning Policies Robust to Instrument Changes for High-precision Surgical Manipulation
[63] Blender Online Community, Blender - a 3D modelling and rendering package, Blender Foundation
[64] Learning from Demonstrations Through the Use of Non-rigid Registration
[65] Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography