Point-interactive colorization is the task of colorizing images given user-provided clicks containing colors (a.k.a. color hints). Unlike unconditional image colorization, which is an underdetermined problem by nature, point-interactive colorization aims to generate images containing the specific colors given by the user. Point-interactive colorization is evaluated by simulating user hints from the ground-truth color image. Following the iColoriT protocol, each user hint covers a 2x2-pixel region, and its color is the average ground-truth color within that region.
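A minimal sketch of this hint-simulation step (the function name, numpy interface, and random sampling of hint locations are assumptions for illustration, not the official iColoriT evaluation code):

```python
import numpy as np

def simulate_hints(gt_rgb, num_hints, hint_size=2, rng=None):
    """Sample square hint patches from a ground-truth color image.

    gt_rgb: (H, W, 3) array. Returns a list of (y, x, color) tuples, where
    (y, x) is the top-left corner of a hint_size x hint_size patch and
    color is the average ground-truth color inside that patch.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = gt_rgb.shape
    hints = []
    for _ in range(num_hints):
        y = int(rng.integers(0, h - hint_size + 1))
        x = int(rng.integers(0, w - hint_size + 1))
        patch = gt_rgb[y:y + hint_size, x:x + hint_size].reshape(-1, 3)
        hints.append((y, x, patch.mean(axis=0)))
    return hints
```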
This work proposes a deep learning approach for user-guided image colorization, which directly maps a grayscale image, along with sparse, local user "hints", to an output colorization with a convolutional neural network.
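A common way to encode such sparse hints as network input is to stack the grayscale channel with hint color channels and a binary hint mask. The sketch below assumes PyTorch tensors and Lab color space; the tensor layout and function name are illustrative, not the paper's code:

```python
import torch

def build_colorization_input(l_channel, hints):
    """Stack the L channel with sparse ab hints and a hint-location mask.

    l_channel: (1, H, W) tensor; hints: list of (y, x, ab) with ab a
    length-2 tensor. Returns a (4, H, W) tensor: [L, hint_a, hint_b, mask].
    """
    _, h, w = l_channel.shape
    hint_ab = torch.zeros(2, h, w)
    mask = torch.zeros(1, h, w)
    for y, x, ab in hints:
        hint_ab[:, y, x] = ab          # place the hint color at its pixel
        mask[0, y, x] = 1.0            # mark that a hint exists here
    return torch.cat([l_channel, hint_ab, mask], dim=0)
```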
This paper proposes a method for instance-aware colorization that leverages an off-the-shelf object detector to obtain cropped object images, uses an instance colorization network to extract object-level features, and applies a fusion module to fuse object-level and image-level features to predict the final colors.
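A simplified sketch of the feature-fusion idea (the per-pixel blending weights, box coordinates, and normalization scheme here are assumptions made for illustration; the paper's actual fusion module differs in detail):

```python
import torch
import torch.nn.functional as F

def fuse_features(full_feat, inst_feats, boxes, w_full, w_insts):
    """Blend per-instance feature maps back into the full-image feature map.

    full_feat: (C, H, W); inst_feats[i]: (C, h_i, w_i);
    boxes[i] = (y0, x0, y1, x1) in feature-map coordinates;
    w_full: (1, H, W) and w_insts[i]: (1, h_i, w_i) are unnormalized
    per-pixel blending weights, normalized over the full-image branch
    and every instance covering that pixel.
    """
    num = full_feat * w_full.exp()
    den = w_full.exp()
    for feat, (y0, x0, y1, x1), wl in zip(inst_feats, boxes, w_insts):
        size = (y1 - y0, x1 - x0)
        # Resize instance features/weights from crop resolution to box size.
        feat_r = F.interpolate(feat[None], size=size, mode='bilinear',
                               align_corners=False)[0]
        w_r = F.interpolate(wl[None], size=size, mode='bilinear',
                            align_corners=False)[0].exp()
        num[:, y0:y1, x0:x1] += feat_r * w_r   # paste weighted instance features
        den[:, y0:y1, x0:x1] += w_r
    return num / den
```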
This work proposes a new Side Window Filtering (SWF) technique, which aligns the window's side or corner with the pixel being processed, and demonstrates that applying the SWF principle effectively prevents artifacts such as the color leakage associated with conventional centered-window filtering.
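The SWF principle can be illustrated with a box filter: instead of a window centered on the pixel, consider only windows whose side or corner touches the pixel and keep the one whose mean best matches it. The single-channel sketch below follows the general idea, not the paper's optimized implementation:

```python
import numpy as np

def side_window_box_filter(img, r=1):
    """Side-window box filter for a single-channel image.

    For each pixel, box means are computed over eight windows whose side or
    corner is aligned with the pixel (up/down/left/right half-windows and
    four quarter windows); the mean closest to the pixel's value is kept,
    which preserves edges that a centered window would blur across.
    """
    h, w = img.shape
    pad = np.pad(img.astype(float), r, mode='edge')
    # Each window is (y0, y1, x0, x1), offsets relative to the pixel.
    windows = [(-r, 0, -r, r), (0, r, -r, r),   # up / down half-windows
               (-r, r, -r, 0), (-r, r, 0, r),   # left / right half-windows
               (-r, 0, -r, 0), (-r, 0, 0, r),   # top-left / top-right corners
               (0, r, -r, 0), (0, r, 0, r)]     # bottom-left / bottom-right
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            py, px = y + r, x + r
            center = pad[py, px]
            means = [pad[py + y0:py + y1 + 1, px + x0:px + x1 + 1].mean()
                     for y0, y1, x0, x1 in windows]
            out[y, x] = min(means, key=lambda m: abs(m - center))
    return out
```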
iColoriT is presented, a novel point-interactive colorization Vision Transformer that propagates user hints to relevant regions, leveraging the global receptive field of Transformers to selectively colorize those regions with only a few local hints.
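A toy Transformer-based colorizer illustrating how a global receptive field lets a single hint influence distant regions (the model dimensions, patch embedding, and per-patch ab prediction head are illustrative assumptions, not the iColoriT architecture):

```python
import torch
import torch.nn as nn

class TinyHintColorizer(nn.Module):
    """Minimal ViT-style colorizer over [L, hint_a, hint_b, mask] inputs."""

    def __init__(self, patch=16, dim=256, depth=4, heads=8, img_size=224):
        super().__init__()
        self.patch = patch
        n_tokens = (img_size // patch) ** 2
        self.embed = nn.Conv2d(4, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 2 * patch * patch)  # per-patch ab values

    def forward(self, x):                                 # x: (B, 4, H, W)
        b, _, h, w = x.shape
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        # Self-attention over all patch tokens propagates hint information globally.
        tokens = self.encoder(tokens + self.pos)
        ab = self.head(tokens).transpose(1, 2)             # (B, 2*p*p, N)
        # Reassemble per-patch predictions into a full-resolution ab map.
        ab = ab.reshape(b, 2, self.patch, self.patch, h // self.patch, w // self.patch)
        ab = ab.permute(0, 1, 4, 2, 5, 3).reshape(b, 2, h, w)
        return ab
```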