Video stabilization aims to remove unwanted camera shake from captured video while preserving the intended camera motion and frame content.
This work presents a frame synthesis algorithm for full-frame video stabilization that first estimates dense warp fields from neighboring frames and then synthesizes the stabilized frame by fusing the warped contents.
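The warp fields and fusion weights in that work are learned; purely as a minimal sketch of the underlying warp-and-fuse idea (not the paper's model), the snippet below backward-warps a neighboring frame with a dense flow field and fuses several warped neighbors under validity masks. Flow from cv2.calcOpticalFlowFarneback can serve as a stand-in for the learned warp fields; the function names are illustrative.

```python
import cv2
import numpy as np

def warp_neighbor(neighbor, flow):
    """Backward-warp a neighboring frame toward the target frame using a
    dense flow field (target -> neighbor), e.g. Farneback optical flow."""
    h, w = flow.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    return cv2.remap(neighbor, map_x, map_y, cv2.INTER_LINEAR)

def fuse_warped(warped, masks):
    """Fuse warped neighbor content by a validity-weighted average, so each
    output pixel is filled by whichever neighbors actually observed it."""
    acc = np.zeros(warped[0].shape, dtype=np.float64)
    wsum = np.zeros(warped[0].shape[:2], dtype=np.float64)
    for frame, mask in zip(warped, masks):
        acc += frame.astype(np.float64) * mask[..., None]
        wsum += mask
    return (acc / np.maximum(wsum, 1e-6)[..., None]).astype(np.uint8)
```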
This paper tackles video stabilization with deep unsupervised learning, borrowing the divide-and-conquer idea from traditional stabilizers while leveraging the representational power of DNNs to handle the challenges of real-world scenarios.
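As a rough classical illustration of the divide-and-conquer decomposition that approach borrows (a sketch, not the paper's network): fit a global homography to sparse feature tracks, then let residual dense flow account for whatever the global model misses.

```python
import cv2

def divide_and_conquer_motion(prev_gray, curr_gray):
    """Toy motion decomposition: a global homography explains camera
    motion; residual dense flow covers what the homography misses."""
    # Global stage: track sparse features and fit a robust homography.
    pts = cv2.goodFeaturesToTrack(prev_gray, 500, 0.01, 8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    H, _ = cv2.findHomography(pts[ok], nxt[ok], cv2.RANSAC, 3.0)
    # Local stage: residual flow between the globally aligned frames.
    aligned = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
    residual = cv2.calcOpticalFlowFarneback(aligned, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    return H, residual
```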
A model is proposed for estimating the parameters on the fly by fusing gyroscope and camera data, both readily available in modern smartphones, and is shown to outperform existing methods in robustness and insensitivity to initialization.
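That paper's estimator is a learned model; purely to illustrate what gyroscope-camera fusion means in the simplest case, here is a textbook complementary filter on a single rotation angle, with all names and rates hypothetical. The gyro supplies low-latency, high-frequency motion, while the slower vision-based estimate corrects the gyro's drift.

```python
def fuse_rotation(prev_angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """Complementary filter: integrate the gyro for high-frequency motion,
    and blend in the camera-based estimate to correct low-frequency drift."""
    gyro_angle = prev_angle + gyro_rate * dt  # dead-reckoned, drifts over time
    return alpha * gyro_angle + (1.0 - alpha) * vision_angle

# Example: fuse a short stream of (gyro_rate, vision_angle) samples at 200 Hz.
angle, dt = 0.0, 1.0 / 200.0
for gyro_rate, vision_angle in [(0.10, 0.001), (0.12, 0.002), (0.09, 0.003)]:
    angle = fuse_rotation(angle, gyro_rate, vision_angle, dt)
```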
A 1D linear convolutional network is used to directly infer the rigid moving least squares warp, which implicitly balances global rigidity against local flexibility and produces visually and quantitatively better results than previous real-time general video stabilization methods.
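The rigid moving-least-squares warp that the network is trained to predict has a classical per-point solution (Schaefer et al., 2006). A minimal sketch, solving each query point's distance-weighted rigid fit directly via Procrustes analysis rather than the paper's learned inference:

```python
import numpy as np

def weighted_rigid_fit(p, q, w):
    """Weighted Procrustes: the rigid transform (R, t) that best maps
    control points p onto targets q under weights w."""
    w = w / w.sum()
    p_star = w @ p                      # weighted centroids
    q_star = w @ q
    P, Q = p - p_star, q - q_star
    U, _, Vt = np.linalg.svd((w[:, None] * P).T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, q_star - R @ p_star

def mls_rigid_warp(v, p, q, a=2.0):
    """Rigid moving least squares: each query point v gets its own rigid
    fit, weighted by inverse distance to the control points, so nearby
    handles dominate while distant ones barely matter."""
    w = 1.0 / (np.linalg.norm(p - v, axis=1) ** (2 * a) + 1e-8)
    R, t = weighted_rigid_fit(p, q, w)
    return R @ v + t
```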
This work proposes an unsupervised deep approach to full-frame video stabilization that generates video frames without cropping and with low distortion, and utilizes frame interpolation techniques to generate in-between frames, reducing inter-frame jitter.
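As a crude flow-based stand-in for the learned interpolation networks such approaches typically use (an approximation, not the paper's method), an in-between frame can be synthesized by warping both endpoint frames half-way along the estimated flow and averaging:

```python
import cv2
import numpy as np

def midpoint_frame(f0, f1):
    """Crude flow-based frame interpolation: backward-warp each endpoint
    frame half-way toward the middle time step and average the results."""
    g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    gx, gy = gx.astype(np.float32), gy.astype(np.float32)
    # Half-step warps; the flow is anchored at f0, so the f0 warp is itself
    # an approximation, acceptable for small inter-frame motion.
    half0 = cv2.remap(f0, gx - 0.5 * flow[..., 0], gy - 0.5 * flow[..., 1],
                      cv2.INTER_LINEAR)
    half1 = cv2.remap(f1, gx + 0.5 * flow[..., 0], gy + 0.5 * flow[..., 1],
                      cv2.INTER_LINEAR)
    mid = 0.5 * (half0.astype(np.float32) + half1.astype(np.float32))
    return mid.astype(np.uint8)
```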
This work aims to declutter the over-complicated formulation of video stabilization with the help of a novel dataset containing pairs of training videos with similar perspective but different motion, and verifies its effectiveness by successfully learning motion-blind full-frame video stabilization using strictly conventional generative techniques.
Experimental results on detecting flying honeybees show that combining classical computer vision techniques with CNNs, together with synthetic training sets, overcomes the problems of applying CNNs directly to this task, and the proposed approach achieves an average F1-score of 0.86 on real-world videos.
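The summary does not spell out the exact pipeline, but the classical-proposals-plus-CNN-verification pattern it describes might look like the following sketch, where `classify_patch` stands in for a hypothetical small CNN trained on synthetic bee images:

```python
import cv2

def propose_regions(frame, bg_model, min_area=20):
    """Classical stage: background subtraction proposes small moving blobs
    that could be flying insects."""
    mask = bg_model.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def detect_bees(frame, bg_model, classify_patch, thresh=0.5):
    """CNN stage: keep only the proposals the classifier scores as 'bee'."""
    hits = []
    for x, y, w, h in propose_regions(frame, bg_model):
        patch = cv2.resize(frame[y:y + h, x:x + w], (32, 32))
        if classify_patch(patch) >= thresh:
            hits.append((x, y, w, h))
    return hits

# Usage sketch: bg_model = cv2.createBackgroundSubtractorMOG2()
```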
To the best of the authors' knowledge, this is the first DNN solution that adopts both sensor data and image content for video stabilization, and it outperforms state-of-the-art alternative solutions in quantitative evaluations and a user study.