(Image credit: Weng et al., via Papersgraph)
These leaderboards are used to track progress in 3D Multi-Object Tracking.
Use these libraries to find 3D Multi-Object Tracking models and implementations.
This paper presents an online tracking method that won first place in the nuScenes Tracking Challenge and outperforms the AB3DMOT baseline by a large margin on the Average Multi-Object Tracking Accuracy (AMOTA) metric.
The framework, CenterPoint, first detects the centers of objects using a keypoint detector, regresses to other attributes including 3D size, 3D orientation, and velocity, and then refines these estimates using additional point features on the object.
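CenterPoint's tracker associates detections across frames by projecting each current center backward with its predicted velocity and matching it to the nearest previous center. A minimal sketch of that idea, assuming a greedy nearest-neighbor policy; the function name and distance threshold are illustrative, not CenterPoint's exact implementation:

```python
import numpy as np

def greedy_match(prev_centers, curr_centers, curr_velocities, dt=0.1, max_dist=1.0):
    """Greedily match current detections to previous-frame centers after
    compensating each current center by its predicted velocity.

    Returns a list of (prev_index, curr_index) pairs.
    """
    if len(prev_centers) == 0:
        return []
    # Estimate where each current detection was one frame ago.
    projected = curr_centers - curr_velocities * dt
    matches, used = [], set()
    for j, p in enumerate(projected):
        dists = np.linalg.norm(prev_centers - p, axis=1)
        if used:
            dists[list(used)] = np.inf  # each previous center matches at most once
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            matches.append((i, j))
            used.add(i)
    return matches
```

In the real tracker, detections are typically visited in confidence order so high-scoring boxes claim their nearest match first.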
This paper proposes EagerMOT, a simple tracking formulation that eagerly integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics, achieving state-of-the-art results across several MOT tasks on the KITTI and nuScenes datasets.
This paper summarizes current 3D MOT methods in a unified framework by decomposing them into four constituent parts: detection pre-processing, association, motion model, and life-cycle management. It then proposes improvements to each part, yielding a strong yet simple baseline: SimpleTrack.
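The fourth component of that decomposition, life-cycle management, decides when a track is born and when it dies. A minimal count-based sketch of the common policy (confirm after several consecutive matches, delete after several consecutive misses); the class name, state labels, and thresholds are illustrative, not SimpleTrack's exact values:

```python
class TrackLifeCycle:
    """Count-based life cycle: a track is 'tentative' until it accumulates
    min_hits consecutive matches, and 'dead' after more than max_age
    consecutive misses."""

    def __init__(self, min_hits=3, max_age=2):
        self.min_hits = min_hits
        self.max_age = max_age
        self.hits = 0
        self.misses = 0
        self.state = "tentative"

    def on_match(self):
        """Called when a detection is associated with this track."""
        self.hits += 1
        self.misses = 0
        if self.state == "tentative" and self.hits >= self.min_hits:
            self.state = "confirmed"

    def on_miss(self):
        """Called when no detection matches this track in a frame."""
        self.hits = 0
        self.misses += 1
        if self.misses > self.max_age:
            self.state = "dead"
```

Tuning `min_hits` trades off false-positive tracks against delayed births, while `max_age` trades identity switches against ghost tracks that linger after an object leaves.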
SimTrack is an end-to-end trainable model for joint detection and tracking from raw point clouds that simplifies the hand-crafted tracking paradigm; results show this simple approach compares favorably with state-of-the-art methods while eliminating heuristic matching rules.
SRT3D is developed, a sparse region-based approach to 3D object tracking that improves on the current state of the art both in terms of runtime and quality, performing particularly well for noisy and cluttered images encountered in the real world.
This paper proposes Sparse R-CNN 3D (SRCN3D), a novel two-stage fully-sparse detector that incorporates sparse queries, sparse attention with box-wise sampling, and sparse prediction, leading to a fully-convolutional and deployment-friendly pipeline.
This paper proposes a novel 3D multi-object cooperative tracking algorithm for autonomous driving via a differentiable multi-sensor Kalman filter that learns to estimate measurement uncertainty for each detection, which allows better use of the theoretical properties of Kalman-filter-based tracking methods.
This paper presents a data-driven approach to online multi-object tracking that uses a convolutional neural network for data association in a tracking-by-detection framework; it learns to perform global assignments in 3D purely from data, handles noisy detections and a varying number of targets, and is easy to train.
3D multi-object tracking (MOT) is an essential component of many applications such as autonomous driving and assistive robotics. Recent work on 3D MOT focuses on developing accurate systems, giving less attention to practical considerations such as computational cost and system complexity. In contrast, this work proposes a simple real-time 3D MOT system. Our system first obtains 3D detections from a LiDAR point cloud. Then, a straightforward combination of a 3D Kalman filter and the Hungarian algorithm is used for state estimation and data association.

Additionally, 3D MOT datasets such as KITTI evaluate MOT methods in 2D space, and standardized 3D MOT evaluation tools are missing for a fair comparison of 3D MOT methods. We therefore propose a new 3D MOT evaluation tool, along with three new metrics, to comprehensively evaluate 3D MOT methods.

We show that, although our system employs a combination of classical MOT modules, we achieve state-of-the-art 3D MOT performance on two 3D MOT benchmarks (KITTI and nuScenes). Surprisingly, although our system does not use any 2D data as input, we achieve competitive performance on the KITTI 2D MOT leaderboard. Our system runs at 207.4 FPS on the KITTI dataset, the fastest among all modern MOT systems. To encourage standardized 3D MOT evaluation, our code is publicly available at http://www.xinshuoweng.com/projects/AB3DMOT.
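The core loop this abstract describes, a constant-velocity Kalman filter per track plus Hungarian assignment on a distance cost, can be sketched as follows. This is a simplified illustration in the spirit of AB3DMOT, not the paper's code: it tracks only the 3D centroid (AB3DMOT's state also includes box size and yaw), and the noise covariances and gating threshold are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track3D:
    """Constant-velocity Kalman filter over a 3D centroid."""

    def __init__(self, center):
        # State: [x, y, z, vx, vy, vz]
        self.x = np.hstack([center, np.zeros(3)])
        self.P = np.eye(6)

    def predict(self, dt=0.1):
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + 0.01 * np.eye(6)  # process noise (illustrative)
        return self.x[:3]

    def update(self, z):
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
        R = 0.1 * np.eye(3)                            # measurement noise (illustrative)
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

def associate(tracks, detections, max_dist=2.0):
    """Hungarian assignment on predicted-center distance; far pairs rejected."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    preds = np.array([t.predict() for t in tracks])
    dets = np.array(detections)
    cost = np.linalg.norm(preds[:, None, :] - dets[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {m[0] for m in matches}
    matched_d = {m[1] for m in matches}
    unmatched_t = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_d = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_t, unmatched_d
```

Per frame, the full system would call `associate`, run `update` on matched tracks, spawn tracks for unmatched detections, and age out tracks that go unmatched too long.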