3260 papers • 126 benchmarks • 313 datasets
Homography estimation is a technique used in computer vision and image processing to find the projective relationship between two images of the same scene (typically a planar scene, or views from a purely rotating camera) captured from different viewpoints. It is used to align images, correct perspective distortion, and perform image stitching. To estimate a homography, a set of corresponding points between the two images must be found (at least four, since a homography has eight degrees of freedom), and a mathematical model fit to those points. Various algorithms and techniques can be used to perform homography estimation, including direct methods such as the Direct Linear Transform, robust estimators such as RANSAC, and machine learning-based approaches.
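As a concrete illustration of the direct approach described above, here is a minimal sketch of the normalized Direct Linear Transform (DLT), assuming at least four exact point correspondences; the function name and array conventions are illustrative, not taken from any specific library:

```python
import numpy as np

def estimate_homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via the normalized DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    def normalize(pts):
        # Translate the centroid to the origin and scale the mean
        # distance to sqrt(2) -- the standard conditioning step.
        c = pts.mean(axis=0)
        d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        s = np.sqrt(2) / d
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.hstack([pts, np.ones((len(pts), 1))])
        return (T @ ph.T).T, T

    sp, Ts = normalize(src)
    dp, Td = normalize(dst)

    # Build the 2N x 9 system A h = 0 from the constraint x' ~ H x.
    rows = []
    for (x, y, _), (u, v, _) in zip(sp, dp):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)

    # h is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    Hn = Vt[-1].reshape(3, 3)

    # Undo the normalization and fix the overall scale.
    H = np.linalg.inv(Td) @ Hn @ Ts
    return H / H[2, 2]
```

Real correspondences contain outliers, so in practice the DLT is wrapped in a robust estimator such as RANSAC (e.g. OpenCV's `cv2.findHomography` with `method=cv2.RANSAC`).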
(Image credit: Papersgraph)
These leaderboards are used to track progress in Homography Estimation
Use these libraries to find Homography Estimation models and implementations
No subtasks available.
This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision and introduces Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation.
Two convolutional neural network architectures are presented for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies.
A neural network conditioned on previously detected models guides a RANSAC estimator to different subsets of all measurements, thereby finding model instances one after another, and demonstrates an accuracy that is superior to state-of-the-art methods.
An unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies and has superior adaptability and performance compared to the corresponding supervised deep learning method.
By applying sigma-consensus, MAGSAC is proposed, which removes the need for a user-defined sigma threshold while significantly improving the accuracy of robust estimation, and is superior to the state of the art in geometric accuracy on publicly available real-world datasets for epipolar geometry and homography estimation.
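For contrast with threshold-free estimators such as MAGSAC, a plain-vanilla RANSAC loop for homographies can be sketched as follows; the function names and the fixed pixel threshold (exactly the user-set parameter MAGSAC avoids) are illustrative:

```python
import numpy as np

def homography_from_4pts(src, dst):
    # Solve the exact 8x8 linear system for h, with h9 fixed to 1:
    # u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), and similarly for v.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=None):
    """Vanilla RANSAC: repeatedly fit minimal 4-point models and keep
    the one with the most inliers under a fixed reprojection threshold."""
    rng = np.random.default_rng(seed)
    n = len(src)
    best_H, best_inliers = None, np.zeros(n, bool)
    ones = np.ones((n, 1))
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        try:
            H = homography_from_4pts(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate sample (e.g. collinear points)
        # Reprojection error of every correspondence under this model.
        p = (H @ np.hstack([src, ones]).T).T
        proj = p[:, :2] / p[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

The hard-coded `thresh` is the inlier-outlier cutoff that must be tuned per dataset, which is the practical pain point that sigma-consensus marginalizes away.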
This work uses regression of point positions to make UnsuperPoint end-to-end trainable and to incorporate non-maximum suppression in the model, and introduces a novel loss function to regularize network predictions to be uniformly distributed.
The proposed self-supervised keypoint learning method greatly improves the quality of feature matching and homography estimation on challenging benchmarks over the state-of-the-art.
This paper attempts to tackle the video stabilization problem in a deep unsupervised learning manner, which borrows the divide-and-conquer idea from traditional stabilizers while leveraging the representation power of DNNs to handle the challenges in real-world scenarios.
This work introduces a method that extracts a laparoscope holder's actions from videos of laparoscopic interventions through a novel homography generation algorithm, outperforming classical homography estimation approaches in both precision and CPU runtime.
A partially differentiable keypoint detection module is presented, which outputs accurate sub-pixel keypoints; a reprojection loss is then proposed to directly optimize these sub-pixel keypoints, and a dispersity peak loss is introduced for accurate keypoint regularization.
Adding a benchmark result helps the community track progress.