3260 papers • 126 benchmarks • 313 datasets
Image harmonization aims to adjust the colors of a composited foreground region so that it appears consistent with its new background.
(Image credit: Papersgraph)
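Concretely, a composite is formed by pasting a masked foreground onto a background, and a harmonization model then recolors the pasted region. A minimal sketch of the setup (NumPy; the `composite` helper is illustrative, not from any of the papers below):

```python
import numpy as np

def composite(foreground, background, mask):
    """Paste the masked foreground region onto the background.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    mask: float array of shape (H, W, 1), 1 inside the pasted region
    """
    return mask * foreground + (1.0 - mask) * background

# Toy example: a bright foreground patch pasted onto a dark background.
h, w = 4, 4
fg = np.full((h, w, 3), 0.8)
bg = np.full((h, w, 3), 0.2)
mask = np.zeros((h, w, 1))
mask[1:3, 1:3] = 1.0

comp = composite(fg, bg, mask)
# A harmonization model would now recolor comp inside the mask so that
# the pasted region matches the background's illumination and tone.
```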
These leaderboards are used to track progress in image harmonization.
Use these libraries to find image harmonization models and implementations.
This work presents a patch-based harmonization network consisting of novel patch-based normalization (PN) blocks and a feature extractor based on statistical color transfer, and achieves state-of-the-art results on the iHarmony4 dataset.
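The feature extractor above builds on statistical color transfer. A classic mean/variance matching step in this spirit (shown here per RGB channel rather than in a decorrelated color space, which is a simplifying assumption) can be sketched as:

```python
import numpy as np

def color_transfer(source, target, eps=1e-6):
    """Shift the source's per-channel mean/std to match the target's.

    source, target: float arrays of shape (H, W, 3) in [0, 1].
    Returns a recolored copy of `source`.
    """
    src_mean = source.mean(axis=(0, 1))
    src_std = source.std(axis=(0, 1))
    tgt_mean = target.mean(axis=(0, 1))
    tgt_std = target.std(axis=(0, 1))
    # Normalize the source statistics, then re-scale to the target's.
    out = (source - src_mean) / (src_std + eps) * tgt_std + tgt_mean
    return np.clip(out, 0.0, 1.0)
```

In a harmonization setting, `source` would be the pasted foreground and `target` the surrounding background region.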
This work contributes the image harmonization dataset iHarmony4 by generating synthesized composite images from the COCO (resp., Adobe5k, Flickr, day2night) dataset, yielding the HCOCO (resp., HAdobe5k, HFlickr, Hday2night) sub-dataset, and proposes a new deep image harmonization method, DoveNet, using a novel domain verification discriminator.
This work proposes, for the first time, a spatial-separated curve rendering network for efficient, high-resolution image harmonization; it reduces parameters by more than 90% compared with previous methods while still achieving state-of-the-art performance on both the synthesized iHarmony4 and real-world DIH test sets.
This work proposes a novel architecture that harmonizes composites in the space of high-level features learned by a pre-trained classification network, and sets a new state of the art in terms of MSE and PSNR metrics.
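MSE and PSNR, the metrics referenced above, are computed directly from pixel differences between the harmonized output and the ground truth. A minimal sketch for images in [0, 255]:

```python
import numpy as np

def mse(pred, gt):
    """Mean squared error over all pixels and channels."""
    return np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)

def psnr(pred, gt, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    err = mse(pred, gt)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)
```

Lower MSE and higher PSNR both indicate that the harmonized image is closer to the ground-truth target.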
A comprehensive survey over the sub-tasks and combined task of image composition is conducted, which summarizes the existing methods, available datasets, and common evaluation metrics.
This work proposes a novel attention module named the Spatial-Separated Attention Module (S2AM) and designs a novel image harmonization framework by inserting the S2AM into the coarse low-level features of the UNet structure in two different ways.
This work treats the pre-trained StyleGAN generator as a learned loss function and utilizes its layer-wise representation to train a novel hierarchical encoder, termed Generative Hierarchical Feature (GH-Feat), which transfers strongly to both generative and discriminative tasks.
This work formulates image harmonization as background-guided domain translation, regulated by well-tailored triplet losses, and uses a domain code extractor to capture background domain information and guide the foreground harmonization.
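The triplet losses mentioned above follow the standard margin formulation; which embeddings serve as anchor, positive, and negative is the paper's design, so the sketch below shows only the generic loss:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard margin-based triplet loss on embedding vectors.

    Pulls the anchor toward the positive and pushes it away from
    the negative until the squared distances differ by `margin`.
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)
```

In a harmonization setting, such a loss can push the harmonized foreground's domain code toward the background's while keeping it away from the unharmonized composite's.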