Propagating information from processed frames to unprocessed frames
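As a concrete illustration of the task, here is a minimal sketch of flow-based propagation with OpenCV: a label mask from an already-processed frame is warped onto the next, unprocessed frame using dense optical flow. The function name and the choice of Farnebäck flow are assumptions for illustration, not the method of any particular paper listed below.

```python
import cv2
import numpy as np

def propagate_mask(prev_frame, next_frame, prev_mask):
    """Warp a label mask from a processed frame onto the next, unprocessed frame.

    prev_frame, next_frame: HxWx3 uint8 BGR images.
    prev_mask: HxW integer label mask aligned with prev_frame.
    Returns an HxW label mask aligned with next_frame.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Backward flow: for each pixel of next_frame, where it came from in prev_frame.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)

    # Nearest-neighbour remapping keeps labels discrete.
    return cv2.remap(prev_mask.astype(np.float32), map_x, map_y,
                     interpolation=cv2.INTER_NEAREST).astype(prev_mask.dtype)
```

Published methods typically replace the hand-crafted flow with learned motion estimation or learned propagation networks and add mechanisms to handle occlusion and drift.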
No benchmarks, datasets, or subtasks are currently listed for this task.
This paper presents a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks, and introduces a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries.
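One common formulation of such boundary relaxation is to stop forcing a single hard label at boundary pixels and instead maximize the total probability assigned to all classes present in a pixel's neighbourhood. The PyTorch sketch below is an interpretation under that assumption; relaxed_boundary_loss and the multi-hot border_labels tensor are hypothetical names, not the paper's code.

```python
import torch
import torch.nn.functional as F

def relaxed_boundary_loss(logits, border_labels):
    """Boundary label relaxation (sketch).

    logits:        (N, C, H, W) raw network outputs.
    border_labels: (N, C, H, W) multi-hot mask; for each pixel, 1 for every class
                   that appears in its local neighbourhood (a single 1 away from
                   boundaries, several 1s along object boundaries).

    Instead of one hard label, the loss maximizes the total probability of the
    set of plausible boundary classes:  -log( sum over allowed classes of P(c) ).
    Assumes every pixel has at least one allowed class.
    """
    log_probs = F.log_softmax(logits, dim=1)
    # logsumexp over the allowed classes equals the log of their summed probability.
    masked = log_probs.masked_fill(border_labels == 0, float('-inf'))
    loss = -torch.logsumexp(masked, dim=1)
    return loss.mean()
```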
This paper demonstrates how to train an appearance translation network from scratch using only a few stylized exemplars while implicitly preserving temporal consistency, leading to a video stylization framework that supports real-time inference, parallel processing, and random access to an arbitrary output frame.
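Because such a translation network is applied to every frame independently, inference carries no sequential state, which is what makes parallel processing and random access possible. A minimal sketch, assuming a trained image-to-image model net and a frame accessor read_frame (both hypothetical):

```python
import torch
from concurrent.futures import ThreadPoolExecutor

@torch.no_grad()
def stylize_frame(net, read_frame, index, device="cuda"):
    """Stylize a single frame chosen by index; no other frames are needed."""
    frame = read_frame(index)                    # (3, H, W) float tensor in [0, 1]
    out = net(frame.unsqueeze(0).to(device))     # per-frame image-to-image translation
    return out.squeeze(0).cpu()

def stylize_frames(net, read_frame, indices, device="cuda", workers=4):
    """Random-access stylization: frames are independent, so order is free."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(stylize_frame, net, read_frame, i, device)
                   for i in indices]
        return [f.result() for f in futures]
```

The thread pool here is only illustrative of frame independence; in practice frames would typically be batched or distributed across devices.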
This work shows that temporal consistency can be achieved by training a convolutional network on a single video with a Deep Video Prior (DVP), and demonstrates its effectiveness in propagating three different types of information (color, artistic style, and object segmentation).
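A minimal sketch of the Deep Video Prior idea, under stated assumptions: net is any image-to-image CNN trained from scratch, frames are the original video frames, and processed are the flickering per-frame outputs of some image operator (for example, colorization). Training on this single video with a reconstruction loss and stopping early is what yields temporally consistent outputs; this is a simplified reading, not the authors' full implementation.

```python
import torch
import torch.nn.functional as F

def train_dvp(net, frames, processed, steps=5000, lr=1e-4, device="cuda"):
    """Deep Video Prior-style training (sketch).

    frames:    list of (3, H, W) tensors, the original video frames I_t.
    processed: list of (3, H, W) tensors, flickering per-frame outputs P_t of some
               image operator applied to each frame independently.
    The network g is trained from scratch on this one video to map I_t -> P_t;
    stopping training early gives temporally consistent outputs g(I_t).
    """
    net = net.to(device)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    n = len(frames)
    for step in range(steps):
        t = step % n                               # cycle over the frames of the video
        x = frames[t].unsqueeze(0).to(device)
        y = processed[t].unsqueeze(0).to(device)
        pred = net(x)
        loss = F.l1_loss(pred, y)                  # reconstruct the processed frame
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```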