Self-supervised scene flow estimation predicts a 3D motion vector (flow) for each point using only lidar point clouds as input, without ground-truth flow annotations. Public leaderboard: Argoverse 2 2024 Scene Flow Challenge.
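As an illustration of how per-point flow predictions are scored, the sketch below computes the mean end-point error (EPE), the standard scene flow metric: the average Euclidean distance between predicted and ground-truth 3D flow vectors. The function name and array shapes are illustrative, not taken from any particular benchmark's API.

```python
import numpy as np

def mean_epe(pred_flow: np.ndarray, gt_flow: np.ndarray) -> float:
    """End-point error: mean Euclidean distance between predicted and
    ground-truth per-point 3D flow vectors (arrays of shape [N, 3])."""
    return float(np.linalg.norm(pred_flow - gt_flow, axis=-1).mean())
```

Self-supervised methods are trained without the `gt_flow` term; it appears only at evaluation time on annotated benchmarks.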
This work addresses the above challenges and estimates scene flow from 4-D radar point clouds by leveraging self-supervised learning with a robust scene flow estimation architecture; three novel losses are bespoke-designed to cope with intractable radar data.
This work proposes PointPWC-Net, a novel end-to-end deep scene flow model that operates on 3D point clouds in a coarse-to-fine fashion and shows strong generalization on the KITTI Scene Flow 2015 dataset, outperforming all previous methods.
This work proposes a metric learning approach for self-supervised scene flow estimation, where a network learns a latent metric to distinguish between points translated by flow estimations and the target point cloud.
This work presents a recurrent architecture that learns a single step of an unrolled iterative alignment procedure for refining scene flow predictions, and demonstrates iterative convergence toward the solution using strong regularization.
This work presents a method of training scene flow with two self-supervised losses, based on nearest neighbors and cycle consistency, that matches current state-of-the-art supervised performance using no real-world annotations and exceeds state-of-the-art performance when the self-supervised approach is combined with supervised learning on a smaller labeled dataset.
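A minimal sketch of the two self-supervised loss families named above, using brute-force NumPy nearest-neighbor search. The function names, shapes, and the simplified form of the cycle term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nn_loss(warped: np.ndarray, target: np.ndarray) -> float:
    """Nearest-neighbor loss: mean distance from each warped source point
    (source + predicted flow) to its closest point in the target cloud.
    Requires no flow annotations, only the two point clouds."""
    d = np.linalg.norm(warped[:, None, :] - target[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def cycle_loss(points: np.ndarray, fwd_flow: np.ndarray,
               bwd_flow: np.ndarray) -> float:
    """Cycle consistency: warping forward with the predicted flow and then
    backward should return each point to where it started. (Here the
    backward flow is assumed pre-sampled at the warped points.)"""
    round_trip = points + fwd_flow + bwd_flow
    return float(np.linalg.norm(round_trip - points, axis=-1).mean())
```

In practice the nearest-neighbor search would use a KD-tree rather than the quadratic distance matrix shown here.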
This work presents a new self-supervised training method and an architecture for 3D scene flow estimation under occlusions that outperforms traditional architectures by a large margin in both occluded and non-occluded scenarios.
This paper revisits the scene flow problem with an approach that relies predominantly on runtime optimization and strong regularization, introducing a neural scene flow prior that uses the architecture of a neural network as a new type of implicit regularizer.
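The runtime-optimization idea can be illustrated without any training data: treat the flow vectors as free parameters and refine them by gradient descent on a nearest-neighbor objective, recomputing correspondences each step. The neural scene flow prior instead parameterizes the flow with a coordinate network whose architecture acts as the implicit regularizer; this dependency-free sketch omits that network and makes no claim about the paper's actual code.

```python
import numpy as np

def optimize_flow(src: np.ndarray, dst: np.ndarray,
                  iters: int = 100, step: float = 0.5) -> np.ndarray:
    """Per-scene runtime flow optimization: no learned weights, just
    gradient descent on a nearest-neighbor loss between clouds."""
    flow = np.zeros_like(src)
    for _ in range(iters):
        warped = src + flow
        # Re-associate each warped point with its nearest destination point.
        d = np.linalg.norm(warped[:, None, :] - dst[None, :, :], axis=-1)
        nearest = dst[d.argmin(axis=1)]
        # Gradient of 0.5 * ||warped - nearest||^2 with respect to flow.
        flow -= step * (warped - nearest)
    return flow
```

Without any regularizer this free-parameter version can overfit noise; the point of the neural prior is that predicting `flow` through a network biases the solution toward smooth fields.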
The fast neural scene flow (FNSF) approach reports, for the first time, real-time performance comparable to learning methods, without any training or out-of-distribution (OOD) bias, on two of the largest open autonomous vehicle (AV) lidar datasets, Waymo Open and Argoverse.
This work combines a self-supervised backbone with a supervised 3D detection head model that learns to utilize motion representations to distinguish dynamic objects exhibiting different movement patterns and shows the relationship between self-supervised multi-frame flow representations and single-frame 3D detection hypotheses.
This work proposes Scene Flow via Distillation, a simple, scalable distillation framework that uses a label-free optimization method to produce pseudo-labels to supervise a feedforward model, and achieves state-of-the-art performance on the Argoverse 2 Self-Supervised Scene Flow Challenge while using zero human labels.
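The distillation recipe reduces to ordinary supervised regression against pseudo-labels emitted by a label-free teacher. A toy sketch, with a linear model standing in for the feedforward flow network (the names, the linear student, and the least-squares loss are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def distill(points: np.ndarray, pseudo_flow: np.ndarray,
            steps: int = 500, lr: float = 0.1) -> np.ndarray:
    """Fit a toy linear 'student' (flow = points @ W) to pseudo-labels
    produced by a label-free teacher, via gradient descent on a
    least-squares loss. No human labels enter the loop."""
    w = np.zeros((points.shape[1], 3))
    n = len(points)
    for _ in range(steps):
        grad = points.T @ (points @ w - pseudo_flow) / n
        w -= lr * grad
    return w
```

The appeal of the scheme is scalability: the slow optimization-based teacher runs offline over unlabeled logs, while the distilled feedforward student is fast at inference time.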