3260 papers • 126 benchmarks • 313 datasets
This work proposes a new globally optimal event-based motion estimation algorithm based on branch-and-bound (BnB). It solves rotational motion estimation on event streams, supporting practical applications such as video stabilisation and attitude estimation.
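As a rough illustration of the branch-and-bound idea (a hypothetical sketch, not the paper's algorithm), the snippet below globally maximizes a one-dimensional alignment objective over a rotation angle: each interval gets an upper bound from an assumed Lipschitz constant, and intervals whose bound cannot beat the incumbent are pruned.

```python
# Hypothetical BnB sketch for 1D rotation search. The objective f and the
# Lipschitz constant are illustrative stand-ins for an event-alignment score.
import heapq
import math

def bnb_maximize(f, lo, hi, lipschitz, tol=1e-6):
    """Globally maximize f on [lo, hi], assuming |f'| <= lipschitz."""
    best_x, best_val = lo, f(lo)
    mid = 0.5 * (lo + hi)
    # Priority queue of intervals keyed by negated upper bound.
    heap = [(-(f(mid) + lipschitz * 0.5 * (hi - lo)), lo, hi)]
    while heap:
        neg_ub, a, b = heapq.heappop(heap)
        if -neg_ub <= best_val + tol:
            break  # no remaining interval can beat the incumbent
        m = 0.5 * (a + b)
        val = f(m)
        if val > best_val:
            best_x, best_val = m, val
        for c, d in ((a, m), (m, b)):
            cm = 0.5 * (c + d)
            ub = f(cm) + lipschitz * 0.5 * (d - c)
            if ub > best_val + tol:  # prune hopeless sub-intervals
                heapq.heappush(heap, (-ub, c, d))
    return best_x, best_val

# Toy objective: alignment peaks at angle 0.3 rad.
theta_hat, _ = bnb_maximize(lambda t: -(t - 0.3) ** 2,
                            -math.pi, math.pi, lipschitz=8.0)
```

The pruning step is what makes the search globally optimal yet tractable: the returned maximizer is guaranteed within `tol` of the true optimum in objective value.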
This work proposes an event-based processing approach for star tracking that operates on the event stream from a star field, using multi-resolution Hough transforms to integrate event data progressively over time and produce accurate relative rotations.
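To illustrate the voting idea behind a Hough transform in this setting (a simplified, assumed sketch, not the paper's multi-resolution implementation), the snippet below lets consecutive angular observations vote for a candidate rotation rate; the accumulator peak is the estimate, and a coarse-to-fine pass over finer candidate grids would play the multi-resolution role.

```python
# Hypothetical Hough-style voting over candidate rotation rates.
import numpy as np

def hough_rotation_rate(angles, times, candidates):
    """angles[i] observed at times[i]; each pair votes for a rate bin."""
    acc = np.zeros(len(candidates))
    bin_width = candidates[1] - candidates[0]
    for i in range(1, len(angles)):
        rate = (angles[i] - angles[i - 1]) / (times[i] - times[i - 1])
        j = int(round((rate - candidates[0]) / bin_width))
        if 0 <= j < len(candidates):
            acc[j] += 1  # vote for the bin containing this rate
    return candidates[int(np.argmax(acc))]

# Toy track rotating at 0.7 rad/s.
t = np.linspace(0.0, 1.0, 50)
rate_hat = hough_rotation_rate(0.7 * t, t, np.linspace(0.0, 2.0, 201))
```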
This work demonstrates event collapse in its simplest form and proposes collapse metrics derived from first principles of space–time deformation, based on differential geometry and physics; in the experimental settings considered, these are the only effective solution against event collapse that leaves well-posed warps unaffected.
A novel, computationally efficient regularizer based on geometric principles is proposed to mitigate event collapse, and it is hoped that this work opens the door to future applications that unlock the advantages of event cameras.
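A minimal sketch of one plausible geometric regularizer (details assumed, not the authors' code): event collapse corresponds to a warp that contracts space toward a point, i.e. a flow field with negative divergence, so penalizing contraction discourages collapsed solutions while leaving divergence-free warps such as pure translations unpenalized.

```python
# Hypothetical divergence-based collapse regularizer.
import numpy as np

def collapse_regularizer(flow):
    """flow: (H, W, 2) array of per-pixel displacements (ux, uy).
    Returns a scalar penalty that grows as the warp contracts space."""
    ux, uy = flow[..., 0], flow[..., 1]
    # Finite-difference divergence: d(ux)/dx + d(uy)/dy.
    div = np.gradient(ux, axis=1) + np.gradient(uy, axis=0)
    # Penalize only negative divergence (contraction toward a point).
    return float(np.mean(np.maximum(-div, 0.0) ** 2))

H, W = 32, 32
y, x = np.mgrid[0:H, 0:W].astype(float)
# A flow pulling everything toward the image center (collapse-prone)...
contracting = np.stack([-(x - W / 2), -(y - H / 2)], axis=-1) * 0.1
# ...versus a pure translation, which has zero divergence.
translation = np.ones((H, W, 2)) * 0.5
```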
In this paper, we propose an efficient event-based motion estimation framework for various motion models. Different from previous works, we design a progressive event-to-map alignment scheme and utilize the spatio-temporal correlations to align events. In detail, we progressively align sampled events in an event batch to the time-surface map and obtain the updated motion model by minimizing a novel time-surface loss. In addition, a dynamic batch size strategy is applied to adaptively adjust the batch size so that all events in the batch are consistent with the current motion model. Our framework has three advantages: a) the progressive scheme refines motion parameters iteratively, achieving accurate motion estimation; b) within one iteration, only a small portion of events are involved in optimization, which greatly reduces the total runtime; c) the dynamic batch size strategy ensures that the constant velocity assumption always holds. We conduct comprehensive experiments to evaluate our framework on challenging high-speed scenes with three motion models: rotational, homography, and 6-DOF models. Experimental results demonstrate that our framework achieves state-of-the-art estimation accuracy and efficiency. The code is available at https://github.com/huangxueyan/PEME.
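The time-surface idea can be sketched as follows (names and details are assumptions for illustration, not the released PEME code): a time surface stores the latest event timestamp per pixel, and warping newly sampled events with a good motion estimate should land them on recently updated pixels, so the staleness of the surface at the warped locations serves as an alignment loss.

```python
# Hypothetical time-surface map and alignment loss.
import numpy as np

def update_time_surface(surface, events):
    """events: iterable of (x, y, t); keep the newest timestamp per pixel."""
    for x, y, t in events:
        surface[int(y), int(x)] = max(surface[int(y), int(x)], t)
    return surface

def time_surface_loss(surface, events, flow, t_ref):
    """Warp events to t_ref with a constant flow (dx, dy) per unit time and
    measure how stale the surface is at the warped locations."""
    H, W = surface.shape
    loss = 0.0
    for x, y, t in events:
        dt = t_ref - t
        wx = int(round(x + flow[0] * dt))
        wy = int(round(y + flow[1] * dt))
        if 0 <= wx < W and 0 <= wy < H:
            loss += t_ref - surface[wy, wx]
    return loss / len(events)

# A point moving right at 1 px per time unit builds the map...
surface = np.zeros((16, 16))
update_time_surface(surface, [(i, 5, float(i)) for i in range(10)])
# ...and a later event batch aligns well only under the correct flow.
batch = [(10.0 + k, 5.0, 10.0 + k) for k in range(3)]
loss_good = time_surface_loss(surface, batch, (1.0, 0.0), 9.0)
loss_bad = time_surface_loss(surface, batch, (0.0, 0.0), 9.0)
```

Minimizing such a loss over the motion parameters, batch by batch, is one way to realize the progressive event-to-map alignment the abstract describes.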
This work demonstrates reliable, purely event-based visual odometry on planar ground vehicles by employing the constrained non-holonomic motion model of Ackermann steering platforms. It extends single-feature n-linearities for regular frame-based cameras to the case of quasi time-continuous event tracks, achieving a polynomial form via variable-degree Taylor expansions.
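As a loose illustration of the variable-degree idea (an assumed sketch, not the paper's derivation), a quasi-continuous event track can be summarized by a polynomial in time whose degree is raised only until the fit residual is small:

```python
# Hypothetical variable-degree polynomial fit of an event track x(t).
import numpy as np

def fit_track(t, x, max_degree=5, tol=1e-3):
    """Return (degree, coeffs) of the lowest-degree polynomial fitting x(t)."""
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(t, x, deg)
        residual = np.max(np.abs(np.polyval(coeffs, t) - x))
        if residual < tol:
            return deg, coeffs  # lowest degree that explains the track
    return max_degree, coeffs

# A quadratic track is recovered with degree 2, not the maximum degree.
t = np.linspace(0.0, 1.0, 100)
deg, _ = fit_track(t, 3.0 * t ** 2)
```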