3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Camera Localization
Use these libraries to find Camera Localization models and implementations
This work uses prior information to improve model hypothesis search, increasing the chance of finding outlier-free minimal sets, and combines neural guidance with differentiable RANSAC to build neural networks that focus on the most informative parts of the input data and optimize the quality of the output predictions.
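As an illustration of the sampling idea only, here is a minimal sketch (not the authors' code) of guidance-weighted hypothesis sampling in RANSAC on a toy 2D line-fitting problem; the per-point weights, which would come from a learned predictor in practice, are made-up constants here.

```python
# Minimal sketch: guided hypothesis sampling for RANSAC on a toy line-fitting task.
# The guidance weights stand in for the output of a learned predictor.
import numpy as np

def guided_ransac_line(points, weights, iters=200, thresh=0.05, rng=None):
    """Fit a 2D line y = a*x + b, sampling minimal sets (2 points)
    with probability proportional to the guidance weights."""
    rng = np.random.default_rng() if rng is None else rng
    probs = weights / weights.sum()
    best_inliers, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False, p=probs)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

# Toy data: 80 inliers on a line plus 40 outliers; inliers get higher weights.
rng = np.random.default_rng(0)
inl = np.stack([np.linspace(0, 1, 80), 0.5 * np.linspace(0, 1, 80) + 0.1], axis=1)
out = rng.uniform(0, 1, size=(40, 2))
pts = np.vstack([inl + rng.normal(0, 0.01, inl.shape), out])
w = np.concatenate([np.full(80, 0.9), np.full(40, 0.1)])  # stand-in for learned guidance
print(guided_ransac_line(pts, w))
```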
This work tackles the problem of event-based camera localization in a known environment, without additional sensing, using a probabilistic generative event model in a Bayesian filtering framework, and proposes the contrast residual as a measure of how well the estimated pose of the event-based camera and the environment explain the observed events.
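The sketch below illustrates only the filtering idea: a particle-filter update in which each pose hypothesis is weighted by a residual-based likelihood. The `contrast_residual` function is a hypothetical placeholder, not the paper's generative event model.

```python
# Illustrative sketch: residual-driven Bayesian (particle) filtering over poses.
import numpy as np

def contrast_residual(pose, events, scene_map):
    # Hypothetical placeholder: lower value means this pose and the known map
    # explain the observed events better. The paper derives this quantity from
    # a probabilistic generative event model, not shown here.
    return float(np.sum((pose - scene_map["true_pose"]) ** 2))

def particle_filter_step(particles, events, scene_map, motion_std=0.02, beta=5.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Predict: diffuse the pose particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each hypothesis by exp(-beta * contrast residual).
    residuals = np.array([contrast_residual(p, events, scene_map) for p in particles])
    weights = np.exp(-beta * residuals)
    weights /= weights.sum()
    # Resample proportionally to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], particles[int(np.argmax(weights))]

particles = np.random.default_rng(0).uniform(-1, 1, size=(200, 3))
scene_map = {"true_pose": np.array([0.2, -0.1, 0.4])}
particles, best = particle_filter_step(particles, events=None, scene_map=scene_map)
print(best)
```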
This paper proposes the Panoramic Annular Localizer, which incorporates a panoramic annular lens and robust deep image descriptors; experiments on public datasets and in the field validate the proposed system.
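For the descriptor-matching step, a toy retrieval sketch follows: a query image is localized by nearest-neighbour search over precomputed deep descriptors of mapped images. The descriptor extractor itself is assumed to exist and is not shown; random vectors stand in for real descriptors.

```python
# Toy sketch of the retrieval step only: descriptor-based place recognition.
import numpy as np

def localize_by_retrieval(query_desc, db_descs, db_poses):
    """Return the pose of the database image whose descriptor is most similar
    (cosine similarity) to the query descriptor."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    scores = db @ q
    best = int(np.argmax(scores))
    return db_poses[best], float(scores[best])

# Example with random stand-in descriptors (512-D) and 2-D map poses.
rng = np.random.default_rng(1)
db_descs = rng.normal(size=(100, 512))
db_poses = rng.uniform(0, 50, size=(100, 2))
query = db_descs[42] + rng.normal(0, 0.05, size=512)  # noisy copy of entry 42
print(localize_by_retrieval(query, db_descs, db_poses))
```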
This paper presents CMRNet, a real-time approach based on a convolutional neural network that localizes an RGB image of a scene in a map built from LiDAR data; it is the first CNN-based approach that learns to match images from a monocular camera to a given, pre-existing 3D LiDAR map.
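A common ingredient of such image-to-LiDAR-map matching is rendering the map as a synthetic depth image from a rough initial pose. The sketch below shows that projection step only, under assumed pinhole intrinsics; it is not CMRNet's actual pipeline.

```python
# Sketch: render a LiDAR map as a sparse depth image from a candidate camera pose.
import numpy as np

def project_lidar_to_depth(points_map, T_cam_map, K, h, w):
    """points_map: (N,3) LiDAR map points, T_cam_map: 4x4 pose guess
    (map -> camera), K: 3x3 intrinsics. Returns an (h,w) depth image."""
    pts_h = np.hstack([points_map, np.ones((len(points_map), 1))])
    pts_cam = (T_cam_map @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1            # drop points behind the camera
    pts_cam = pts_cam[in_front]
    uvw = (K @ pts_cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = uvw[:, 2]
    depth = np.full((h, w), np.inf)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[ok], v[ok], z[ok]):   # keep the nearest point per pixel
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0
    return depth

# Example with random points and assumed intrinsics.
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(5000, 3)) + np.array([0, 0, 15])
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
depth = project_lidar_to_depth(pts, np.eye(4), K, h=240, w=320)
print(int((depth > 0).sum()), "pixels with depth")
```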
This work argues that repeatable regions are not necessarily discriminative and can therefore lead to the selection of suboptimal keypoints, and proposes to jointly learn keypoint detection and description together with a predictor of the local descriptor discriminativeness.
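At inference time, one way to use such predictions is to score each pixel by the product of a repeatability map and a reliability (discriminativeness) map and keep the strongest local maxima. The sketch below illustrates that selection step with random arrays standing in for the learned maps.

```python
# Sketch: select keypoints that are both repeatable and discriminative.
import numpy as np
from scipy.ndimage import maximum_filter

def select_keypoints(repeatability, reliability, k=100):
    score = repeatability * reliability                 # joint criterion
    is_peak = score == maximum_filter(score, size=3)    # 3x3 non-max suppression
    ys, xs = np.nonzero(is_peak)
    order = np.argsort(score[ys, xs])[::-1][:k]
    return np.stack([xs[order], ys[order]], axis=1), score[ys[order], xs[order]]

# Random maps stand in for network predictions.
rng = np.random.default_rng(0)
rep = rng.random((240, 320))
rel = rng.random((240, 320))
kpts, scores = select_keypoints(rep, rel, k=50)
print(kpts.shape, scores[:3])
```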
This work presents a multimodal camera relocalization framework that captures ambiguities and uncertainties with continuous mixture models defined on the manifold of camera poses, using Bingham distributions and a multivariate Gaussian to model the position, all within an end-to-end deep neural network.
This work introduces PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model based on the direct alignment of multiscale deep features, casting camera localization as metric learning.
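The core of direct feature alignment is a featuremetric residual between features sampled at projected 3D points and their reference features. The drastically simplified sketch below only evaluates such a cost for a candidate pose; a real system minimizes it over the full 6-DoF pose with Gauss-Newton, and all inputs here are assumed placeholders rather than PixLoc's actual interfaces.

```python
# Sketch: evaluate a featuremetric alignment cost for one candidate pose.
import numpy as np

def featuremetric_cost(points_3d, ref_feats, feat_map, T_cam_world, K):
    """points_3d: (N,3) world points, ref_feats: (N,C) their reference features,
    feat_map: (H,W,C) dense features of the query image, T_cam_world: 4x4 pose."""
    h, w, _ = feat_map.shape
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    valid = pts_cam[:, 2] > 0.1                       # keep points in front of the camera
    uvw = (K @ pts_cam[valid].T).T
    u = np.clip((uvw[:, 0] / uvw[:, 2]).astype(int), 0, w - 1)
    v = np.clip((uvw[:, 1] / uvw[:, 2]).astype(int), 0, h - 1)
    sampled = feat_map[v, u]                          # nearest-neighbour feature sampling
    return float(np.mean(np.sum((sampled - ref_feats[valid]) ** 2, axis=1)))
```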
This paper proposes a novel method to simultaneously solve the problems of mapping and localization from a set of squared planar markers and shows that the method performs better than Structure from Motion and visual SLAM techniques.
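The per-marker localization step can be illustrated with OpenCV's solvePnP, assuming the four corners of a square marker of known side length have already been detected in the image; the mapping and graph-optimization parts of the method are not shown.

```python
# Sketch: recover the camera pose relative to one square planar marker.
import numpy as np
import cv2

def pose_from_marker(corners_px, side, K, dist=None):
    """corners_px: (4,2) detected corners, ordered to match `obj` below."""
    s = side / 2.0
    obj = np.array([[-s,  s, 0], [ s,  s, 0],
                    [ s, -s, 0], [-s, -s, 0]], dtype=np.float64)  # marker frame
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px.astype(np.float64),
                                  K.astype(np.float64), dist)
    R, _ = cv2.Rodrigues(rvec)   # rotation marker -> camera
    return ok, R, tvec
```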