3260 papers • 126 benchmarks • 313 datasets
(Image credit: Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning)
These leaderboards are used to track progress in Motion Planning.
Use these libraries to find Motion Planning models and implementations.
Complex-YOLO, a state-of-the-art real-time 3D object detection network operating on point clouds only, is introduced, and a specific Euler-Region-Proposal Network (E-RPN) is proposed to estimate the pose of the object by adding an imaginary and a real fraction to the regression network.
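The E-RPN's angle encoding can be illustrated with a minimal sketch: the network regresses a real and an imaginary component, and the yaw angle is recovered with `arctan2`, avoiding a direct (and singular) angle regression. The function name below is a hypothetical stand-in, not from the paper's code.

```python
import numpy as np

def angle_from_complex(t_im, t_re):
    """Recover a yaw angle from regressed imaginary and real components."""
    return np.arctan2(t_im, t_re)

# equal real and imaginary parts correspond to a 45-degree yaw
yaw = angle_from_complex(1.0, 1.0)
```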
The Deep Planning Network (PlaNet) is proposed, a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space using a latent dynamics model with both deterministic and stochastic transition components.
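PlaNet's "fast online planning in latent space" can be sketched as cross-entropy-method (CEM) search over action sequences, scored by rolling out a latent dynamics model. The toy `dynamics` and `reward` functions below are hypothetical stand-ins for the learned model; only the planning loop reflects the paper's idea.

```python
import numpy as np

def dynamics(z, a):
    # toy deterministic latent transition (a learned model in PlaNet)
    return z + a

def reward(z):
    # toy reward that peaks when the latent state reaches the origin
    return -abs(z)

def cem_plan(z0, horizon=5, pop=500, elites=50, iters=5, seed=0):
    """Cross-entropy-method planning over action sequences in latent space."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        actions = rng.normal(mean, std, size=(pop, horizon))
        returns = np.empty(pop)
        for i, seq in enumerate(actions):
            z, total = z0, 0.0
            for a in seq:
                z = dynamics(z, a)
                total += reward(z)
            returns[i] = total
        # refit the sampling distribution to the highest-return sequences
        elite = actions[np.argsort(returns)[-elites:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # execute mean[0], then replan at the next step

plan = cem_plan(z0=3.0)
```

In the real agent, the planner is rerun at every step (model-predictive control), so only the first action of the optimized sequence is executed.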
This work extends the previous approach to develop an algorithm that learns collision avoidance among a variety of types of dynamic agents without assuming they follow any particular behavior rules, and introduces an LSTM-based strategy that enables the algorithm to use observations of an arbitrary number of other agents, whereas previous methods required a fixed observation size.
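The LSTM idea above can be sketched in a few lines: per-agent observations are fed through a recurrent cell one at a time, so any number of agents folds into one fixed-size vector for the policy. The weights here are random stand-ins for a trained encoder, and the function name is hypothetical.

```python
import numpy as np

def lstm_encode(agent_obs, hidden=8, seed=0):
    """Fold a variable-length stack of per-agent observations into one vector."""
    rng = np.random.default_rng(seed)
    d = agent_obs.shape[1]
    W = rng.standard_normal((4 * hidden, d + hidden)) * 0.1  # untrained weights
    h, c = np.zeros(hidden), np.zeros(hidden)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for x in agent_obs:  # one recurrent step per observed agent
        z = W @ np.concatenate([x, h])
        i, f, o, g = np.split(z, 4)  # input, forget, output gates and candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h  # fixed-size summary, regardless of how many agents were seen

# scenes with 3 and 7 observed agents yield same-shaped encodings
enc3 = lstm_encode(np.ones((3, 4)))
enc7 = lstm_encode(np.ones((7, 4)))
```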
STRIPStream is introduced: an extension of the STRIPS language that can model such domains by supporting the specification of blackbox generators to handle complex constraints; STRIPStream problems are then solved by reduction to a sequence of finite-domain planning problems.
This work provides domain-independent algorithms that reduce PDDLStream problems to a sequence of finite PDDL problems and introduces an algorithm that dynamically balances exploring new candidate plans and exploiting existing ones to solve tightly-constrained problems.
This letter presents a new deep learning-based framework for robust nonlinear estimation and control using the concept of a Neural Contraction Metric, and demonstrates how to exploit NCMs to design an online optimal estimator and controller for nonlinear systems with bounded disturbances utilizing their duality.
This dataset was collected by a fleet of 20 autonomous vehicles along a fixed route in Palo Alto, California over a four-month period, and forms the largest, most complete, and most detailed dataset to date for developing self-driving machine learning tasks such as motion forecasting, planning, and simulation.
This paper proposes EagerMOT, a simple tracking formulation that eagerly integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics and achieves state-of-the-art results across several MOT tasks on the KITTI and NuScenes datasets.
This work proposes a new efficient and complete solver, under general constraints, for monotone instances (those solvable by moving each object at most once), achieving 57.3% faster computation and a threefold higher success rate than state-of-the-art methods.
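The monotone property can be made concrete with a toy sketch: since each object moves at most once, a plan is just an ordering in which every object's goal cell is free at the moment it moves. The greedy pass below is a hypothetical illustration of that idea on a grid, not the paper's solver.

```python
def monotone_plan(starts, goals):
    """Return a move order (object ids) for a monotone instance, else None."""
    remaining = set(starts)
    occupied = set(starts.values())
    order = []
    while remaining:
        # an object is movable if its goal cell is currently unoccupied
        movable = [o for o in remaining
                   if goals[o] == starts[o] or goals[o] not in occupied]
        if not movable:
            return None  # no object can move: instance is not monotone
        o = movable[0]
        occupied.discard(starts[o])
        occupied.add(goals[o])
        remaining.discard(o)
        order.append(o)
    return order

# B sits on A's goal cell, so B must move first for a monotone plan to exist
plan = monotone_plan(starts={"A": (0, 0), "B": (1, 0)},
                     goals={"A": (1, 0), "B": (2, 0)})
```

Note the greedy choice of `movable[0]` suffices for this toy example but is not complete in general; the paper's contribution is an efficient and complete solver for this class.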