A method that removes temporal flickering and other artifacts from videos, in particular those introduced by (non-temporal-aware) per-frame processing.
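Flicker in this setting is commonly quantified with a warping-error style measure: each output frame is compared to the previous output frame warped towards it by optical flow, so that only motion-compensated differences count as inconsistency. Below is a minimal, hedged sketch of such a measure; it assumes the per-pair flow and a visibility/occlusion mask are precomputed (e.g. by an off-the-shelf estimator), and all function and variable names are illustrative rather than taken from any particular paper.

```python
# Hedged sketch of a warping-error style temporal-consistency measure.
# Assumes per-pair optical flow and visibility masks are precomputed;
# tensor shapes follow the PyTorch (N, C, H, W) convention.
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` (1, C, H, W) with pixel-space `flow` (1, 2, H, W)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0).to(frame)  # (1, 2, H, W)
    coords = base + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

def warping_error(outputs, flows, masks):
    """Mean masked L1 difference between each output frame and the
    flow-warped previous output frame; lower means less flicker."""
    errs = []
    for t in range(1, len(outputs)):
        warped_prev = backward_warp(outputs[t - 1], flows[t])
        errs.append((masks[t] * (outputs[t] - warped_prev).abs()).mean())
    return torch.stack(errs).mean()

# Toy usage with random stand-ins for real frames, flow, and masks:
outs = [torch.rand(1, 3, 32, 32) for _ in range(5)]
flows = [torch.zeros(1, 2, 32, 32) for _ in range(5)]
masks = [torch.ones(1, 1, 32, 32) for _ in range(5)]
print(warping_error(outs, flows, masks).item())
```

A per-frame processed video typically scores noticeably worse on a measure like this than the original footage, which is precisely the gap the methods below aim to close.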
This work shows that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior, and proposes a carefully designed iteratively reweighted training strategy to address the challenging multimodal inconsistency problem.
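The core Deep Video Prior idea can be sketched very compactly: a small convolutional network is trained from scratch on the single video at hand to reproduce the per-frame processed frames from the input frames, and because the network tends to fit the consistent mapping before the flickering residuals, stopping training early yields temporally stable outputs. The sketch below is a minimal illustration under these assumptions (a tiny stand-in network, frames preloaded as tensors); it omits the iteratively reweighted training strategy mentioned above.

```python
# Minimal sketch of the Deep Video Prior idea: fit a small CNN on one video
# so that model(input_frame) approximates the per-frame processed frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Small stand-in network trained from scratch on a single video."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_deep_video_prior(input_frames, processed_frames, steps=25, lr=1e-4):
    """input_frames / processed_frames: lists of (1, 3, H, W) tensors in [0, 1]."""
    model = TinyCNN()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in zip(input_frames, processed_frames):
            opt.zero_grad()
            loss = F.l1_loss(model(x), y)  # reconstruct the per-frame result
            loss.backward()
            opt.step()
    # After training, model(input_frame) gives the temporally consistent output.
    return model

# Toy usage with random "frames" standing in for a real decoded video:
frames = [torch.rand(1, 3, 64, 64) for _ in range(4)]
flickery = [f + 0.05 * torch.randn_like(f) for f in frames]
model = train_deep_video_prior(frames, flickery, steps=2)
stabilized = [model(f) for f in frames]
```

The number of optimization steps acts as the main knob here: too few and the per-frame effect is not reproduced, too many and the network starts fitting the flicker as well.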
This work presents an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video; it can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition.
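A minimal sketch of what such a recurrent formulation can look like: frames are processed in order and each output is conditioned on the previous output, which is what lets consistency be enforced across arbitrary, unseen per-frame tasks. The architecture below (plain concatenation of the previous output, a three-layer CNN predicting a residual correction) is an assumption for illustration only, not the paper's network.

```python
# Hedged sketch of a recurrent temporal-consistency network: each step sees
# the current per-frame processed result and the previous stabilized output.
import torch
import torch.nn as nn

class RecurrentStabilizer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Input: current processed frame (3 ch) concatenated with previous output (3 ch).
        self.net = nn.Sequential(
            nn.Conv2d(6, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),
        )

    def forward(self, processed_frames):
        """processed_frames: list of (1, 3, H, W) tensors from any per-frame method."""
        outputs = [processed_frames[0]]  # first frame is passed through unchanged
        for frame in processed_frames[1:]:
            x = torch.cat([frame, outputs[-1]], dim=1)
            outputs.append(frame + self.net(x))  # predict a correction residual
        return outputs

# Toy usage: stabilize a short sequence of random "processed" frames.
model = RecurrentStabilizer()
frames = [torch.rand(1, 3, 32, 32) for _ in range(4)]
stable = model(frames)
```

Such a network would typically be trained with a combination of a fidelity term (stay close to the per-frame result) and a flow-based temporal term like the warping error sketched above.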
This work proposes an approach that stylizes video streams in real time at full HD resolutions while providing interactive consistency control, and develops a lite optical-flow network that operates at 80 frames per second (FPS) on desktop systems with sufficient accuracy.
This work shows that temporal consistency can be achieved by training a convolutional network on a video with Deep Video Prior (DVP), and shows its effectiveness in propagating three different types of information (color, artistic style, and object segmentation).
This work proposes a general flicker removal framework that receives only a single flickering video as input, without additional guidance, and achieves satisfactory deflickering performance, even outperforming baselines that use extra guidance on a public benchmark.