3260 papers • 126 benchmarks • 313 datasets
Facial inpainting (or face completion) is the task of generating plausible facial structures for missing pixels in a face image. (Image credit: SymmFCNet)
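The task definition above can be sketched in a few lines. This is a minimal NumPy-only illustration, not any particular method: the square mask and the mean-fill "generator" are hypothetical placeholders, and real approaches replace the fill step with a trained network.

```python
import numpy as np

def composite(image, prediction, mask):
    """Keep known pixels from the input; take missing pixels from the model."""
    return image * mask + prediction * (1 - mask)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))     # face image with values in [0, 1]
mask = np.ones((64, 64, 1))         # 1 = known pixel, 0 = missing pixel
mask[20:40, 20:40] = 0              # square hole to be completed

# Stand-in "generator": fill the hole with the mean of the known pixels.
# A real inpainting model would predict plausible facial structure here.
known_mean = image[mask[:, :, 0] == 1].mean()
prediction = np.full_like(image, known_mean)

result = composite(image, prediction, mask)
```

The composite step is common across methods: only the missing region is taken from the model's output, so known pixels are preserved exactly.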
These leaderboards are used to track progress in facial inpainting.
Use these libraries to find facial inpainting models and implementations.
This work trains the network with an additional style loss, which makes it possible to generate realistic results even when large portions of the image are removed, and is well suited to producing high-quality synthetic images from intuitive user inputs.
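A style loss of the kind mentioned above is commonly computed from Gram matrices of feature maps. The sketch below shows that standard formulation in NumPy; the function names are illustrative, and the work itself combines such a term with other losses.

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of an (H, W, C) feature map."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w * c)   # normalized C x C matrix

def style_loss(feat_output, feat_target):
    """Mean squared difference between the two Gram matrices."""
    g_out = gram_matrix(feat_output)
    g_tgt = gram_matrix(feat_target)
    return float(np.mean((g_out - g_tgt) ** 2))
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, this loss encourages matching texture statistics rather than exact pixel positions, which is why it helps when large regions must be synthesized from scratch.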
A one-stage model is presented that uses dense combinations of dilated convolutions to obtain larger and more effective receptive fields, together with a novel self-guided regression loss that concentrates on uncertain areas and enhances semantic detail.
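The receptive-field benefit of dilated convolutions can be seen with a small 1-D sketch (NumPy only, illustrative rather than the paper's architecture): inserting gaps between kernel taps widens the input span of each layer without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with `dilation - 1` zeros between kernel taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1           # input span of one layer
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

Stacking size-3 kernels with dilations 1, 2, 4 yields a receptive field of 15 inputs, versus 7 for three undilated layers, which is why dilated stacks are attractive for filling large holes.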
This paper demonstrates qualitatively and quantitatively that the proposed face completion algorithm can handle large areas of missing pixels in arbitrary shapes and generate realistic face completion results.
This work proposes a dual-control one-stage framework that decouples the reference image into two levels for flexible control: high-level identity information, which determines the shape of the face, and low-level texture information, which describes the component-aware texture.
It is demonstrated that PATMAT outperforms state-of-the-art models in terms of image quality, the preservation of person-specific details, and the identity of the subject, and the results suggest that PATMAT is a promising approach for improving the quality of personalized face inpainting.
This paper proposes a novel approach to face swapping from the perspective of fine-grained facial editing, dubbed "editing for swapping" (E4S), which outperforms existing methods in preserving texture, shape, and lighting.
Two kinds of symmetry-enforcing modules are leveraged to form a symmetry-consistent CNN model (SymmFCNet) for effective face completion; it generates globally consistent results on images with synthetic and real occlusions and performs favorably against state-of-the-art methods.
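The intuition behind symmetry consistency can be shown with a deliberately simplified sketch: penalize left-right disagreement in the completed face. Note this raw horizontal-flip penalty is only an illustration of the idea; SymmFCNet's actual modules use learned correspondence rather than a naive mirror.

```python
import numpy as np

def flip_consistency_loss(output):
    """Mean squared difference between an (H, W, C) image and its mirror.

    Hypothetical simplification: a real symmetry module must account for
    head pose and illumination differences between the two face halves.
    """
    mirrored = output[:, ::-1, :]        # horizontal flip along width
    return float(np.mean((output - mirrored) ** 2))
```

A perfectly symmetric completion scores zero, while completions that invent inconsistent structure on one side of the face are penalized.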
This work shows that simple losses are highly effective at reconstructing images with deep generators. Analyzing the statistics of reconstruction errors for training versus validation images shows that pure GAN models appear to generalize well, in contrast with models using hybrid adversarial losses, which are among the most widely applied generative methods.
This work offers a face completion encoder-decoder, based on a convolutional operator with a gating mechanism, trained with an ample set of face occlusions, and proposes to play the occlusion game: the authors render 3D objects onto different face parts, providing precious knowledge of the impact of effectively removing those occlusions.
A novel recurrent neural network (RNN)-based approach for face reenactment is presented that adjusts for both pose and expression variations, can be applied to a single image or a video sequence, and uses a novel Poisson blending loss that combines Poisson optimization with perceptual loss.