3260 papers • 126 benchmarks • 313 datasets
Visual Odometry is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors.
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
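In its simplest form, a visual odometry back end chains frame-to-frame relative pose estimates into a trajectory. Below is a minimal numpy sketch of that composition step; the relative poses are hand-made stand-ins for what a feature-matching front end would actually estimate:

```python
import numpy as np

def compose(pose, rel):
    """Chain a relative pose onto an absolute pose (both 4x4 SE(3) matrices)."""
    return pose @ rel

def rel_pose(yaw, t):
    """Build a 4x4 SE(3) matrix from a yaw rotation (rad) and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = t
    return T

# Dead-reckon a trajectory from frame-to-frame estimates.
pose = np.eye(4)
trajectory = [pose[:3, 3].copy()]
for step in [rel_pose(0.0, [1, 0, 0]),
             rel_pose(np.pi / 2, [1, 0, 0]),
             rel_pose(0.0, [1, 0, 0])]:
    pose = compose(pose, step)
    trajectory.append(pose[:3, 3].copy())
# Final position is (2, 1, 0): the mid-trajectory turn redirects later motion,
# which is also why small rotation errors cause large position drift in VO.
```

Because each relative pose is multiplied onto all previous ones, estimation errors accumulate; this drift is what the loop-closure and global-fusion papers below address.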
These leaderboards are used to track progress in Visual Odometry
Use these libraries to find Visual Odometry models and implementations
This paper proposes the Double Sphere camera model, which fits cameras with large field-of-view lenses well, is computationally inexpensive, and has a closed-form inverse; the model is evaluated on a calibration dataset with several different lenses.
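A sketch of the Double Sphere projection as described in the paper (Usenko et al., 2018): a 3D point is projected through two unit spheres, where `xi` shifts the second sphere and `alpha` blends toward a pinhole-like projection. The parameter values here are illustrative, not calibrated:

```python
import numpy as np

def project_double_sphere(point, fx, fy, cx, cy, xi, alpha):
    """Project a 3D point with the Double Sphere camera model."""
    x, y, z = point
    d1 = np.sqrt(x * x + y * y + z * z)                 # distance to first sphere center
    d2 = np.sqrt(x * x + y * y + (xi * d1 + z) ** 2)    # distance to shifted sphere center
    denom = alpha * d2 + (1.0 - alpha) * (xi * d1 + z)
    return fx * x / denom + cx, fy * y / denom + cy

# With xi = 0 and alpha = 0 the model reduces to a plain pinhole camera:
u, v = project_double_sphere((0.5, 0.0, 1.0), 400, 400, 320, 240, xi=0.0, alpha=0.0)
# → u = 400 * 0.5 / 1.0 + 320 = 520.0, v = 240.0
```

The closed-form structure (no trigonometric functions or iterative undistortion) is what makes the model cheap to evaluate inside an optimization loop.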
Extensive experiments on the KITTI VO dataset show performance competitive with state-of-the-art methods, verifying that the end-to-end deep learning technique can be a viable complement to traditional VO systems.
The TUM VI benchmark is proposed: a novel dataset with a diverse set of sequences in different scenes for evaluating visual-inertial (VI) odometry. It provides camera images at 1024×1024 resolution and 20 Hz with high dynamic range and photometric calibration, and state-of-the-art VI odometry approaches are evaluated on the dataset.
This paper proposes a sensor fusion framework that fuses local states with global sensors to achieve locally accurate and globally drift-free pose estimation. The system is a general framework that can easily fuse various global sensors in a unified pose graph optimization.
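As a drastically simplified stand-in for that pose-graph optimization, the sketch below estimates a least-squares rigid alignment (Kabsch/Umeyama style, no scale) between a local 2D odometry frame and a few global fixes, then maps all local poses into the global frame; the trajectory and transform are synthetic:

```python
import numpy as np

def align_local_to_global(local_pts, global_pts):
    """Least-squares rigid 2D alignment (rotation + translation) from
    a drifting local odometry frame to sparse global measurements."""
    mu_l = local_pts.mean(axis=0)
    mu_g = global_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (global_pts - mu_g)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_l
    return R, t

# Synthetic local trajectory: the global points seen in a rotated, shifted frame.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
global_fixes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
local = (global_fixes - np.array([2.0, 3.0])) @ R_true

R, t = align_local_to_global(local, global_fixes)
fused = local @ R.T + t   # all local poses mapped into the global frame
```

A real system instead optimizes over the whole graph of relative and global constraints, so the correction can vary along the trajectory rather than being one rigid transform.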
A novel system that explicitly disentangles scale from the network estimation, achieving state-of-the-art results among self-supervised learning-based methods on the KITTI Odometry and NYUv2 datasets, and presenting some interesting findings on the limited generalization ability of PoseNet-based relative pose estimation methods.
PL-SLAM is proposed, a stereo visual SLAM system that combines both points and line segments to work robustly in a wider variety of scenarios, particularly in those where point features are scarce or not well-distributed in the image.
The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
This work presents two minimal solvers for the stereo relative pose problem, specifically the case when a minimal set consists of three point or line features and each of them has three known projections on two stereo cameras.
A Bayesian sun detection model is presented that infers a three-dimensional sun direction vector from a single RGB image and uses a Monte Carlo dropout scheme to compute a principled uncertainty associated with each prediction.