Blind face restoration aims to recover high-quality faces from their low-quality counterparts, which suffer from unknown degradations such as low resolution, noise, blur, and compression artifacts. Real-world scenarios are more challenging still, owing to more complicated degradations and diverse poses and expressions. (Description source: Towards Real-World Blind Face Restoration with Generative Facial Prior)
It is demonstrated that DeblurGAN-v2 achieves highly competitive performance on several popular benchmarks in terms of both deblurring quality (objective and subjective) and efficiency, and is also effective for general image restoration tasks.
HiFaceGAN is presented: a multi-stage framework containing several nested CSR units that progressively replenish facial details, guided by hierarchical semantics extracted from front-end content-adaptive suppression modules.
This work proposes a new method that first learns a GAN for high-quality face image generation and embeds it into a U-shaped DNN as a prior decoder, then fine-tunes the GAN-prior-embedded DNN on a set of synthesized low-quality face images.
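Fine-tuning on "synthesized low-quality face images" presupposes a degradation pipeline that turns HQ training faces into LQ inputs. Below is a minimal, hypothetical numpy sketch of such a pipeline (box blur, then area downsampling, then Gaussian noise); the function name and parameters are illustrative, and real pipelines typically also include JPEG compression artifacts:

```python
import numpy as np

def synthesize_lq(hq, scale=4, noise_sigma=0.05, blur_passes=2, seed=0):
    """Toy degradation model for training-pair synthesis (illustrative
    parameters, not any paper's actual settings).

    hq: float array in [0, 1] of shape (H, W), with H and W divisible
    by `scale`. Applies repeated 3x3 box blur, block-average
    downsampling, and additive Gaussian noise.
    """
    img = hq.astype(np.float64)
    # repeated 3x3 box blur (edge-padded)
    for _ in range(blur_passes):
        p = np.pad(img, 1, mode="edge")
        img = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    # downsample by block averaging
    h, w = img.shape
    img = img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # additive white Gaussian noise, clipped back to [0, 1]
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, noise_sigma, img.shape), 0.0, 1.0)
```

Applying this to each HQ image yields aligned (LQ, HQ) pairs for supervised fine-tuning.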
An effective baseline model called Swin Transformer U-Net (STUNet) is developed, and experimental results show that it performs favourably against state-of-the-art methods on various BFR tasks.
This work proposes DifFace, a novel method that copes with unseen and complex degradations more gracefully, without complicated loss designs; it can contract the error of the restoration backbone, making the method more robust to unknown degradations.
Experiments show that the GFRNet not only performs favorably against the state-of-the-art image and face restoration methods, but also generates visually photo-realistic results on real degraded facial images.
A novel approach, called mGANprior, is proposed to incorporate well-trained GANs as an effective prior for a variety of image processing tasks: multiple latent codes generate multiple feature maps at an intermediate layer of the generator, and these are composed with adaptive channel importance to recover the input image.
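The composition step described above (several per-code feature maps merged with learned channel importance) can be sketched as a simple weighted sum. This is a minimal numpy illustration of the general idea, not mGANprior's actual implementation; names and tensor layout are assumptions:

```python
import numpy as np

def compose_features(feature_maps, channel_weights):
    """Compose per-latent-code feature maps with adaptive channel importance.

    feature_maps: (N, C, H, W) -- one intermediate feature map per latent code.
    channel_weights: (N, C) -- learned importance of each code's channels.
    Returns the weighted sum (C, H, W) passed to the remaining generator layers.
    """
    w = channel_weights[:, :, None, None]      # broadcast to (N, C, 1, 1)
    return (feature_maps * w).sum(axis=0)
```

In the actual method, both the latent codes and the channel weights are optimized so that the composed output reconstructs the target image.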
In many real-world face restoration applications, e.g., smartphone photo albums and old films, multiple high-quality (HQ) images of the same person are usually available for a given degraded low-quality (LQ) observation. However, most existing guided face restoration methods are based on a single HQ exemplar image and are limited in properly exploiting guidance to improve generalization to unknown degradation processes. To address these issues, this paper proposes to enhance blind face restoration by utilizing multi-exemplar images and adaptive fusion of features from the guidance and degraded images. First, given a degraded observation, we select the optimal guidance based on the weighted affine distance between landmark sets, where the landmark weights are learned so that the selected guidance image is optimized for HQ image reconstruction. Second, moving least-squares and adaptive instance normalization are leveraged for spatial alignment and illumination translation of the guidance image in the feature space. Finally, for better feature fusion, multiple adaptive spatial feature fusion (ASFF) layers are introduced to incorporate guidance features in an adaptive and progressive manner, resulting in our ASFFNet. Experiments show that ASFFNet performs favorably in quantitative and qualitative evaluation and is effective in generating photo-realistic results on real-world LQ images. The source code and models are available at https://github.com/csxmli2016/ASFFNet.
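The illumination-translation step above relies on adaptive instance normalization (AdaIN), which re-normalizes content features to carry the guidance features' per-channel statistics. Here is a minimal numpy sketch of AdaIN in general, not ASFFNet's exact implementation; the tensor layout (C, H, W) is an assumption:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization.

    content, style: arrays of shape (C, H, W). Normalizes the content
    features channel-wise, then rescales and shifts them to match the
    style (guidance) features' per-channel mean and std.
    """
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / c_std + s_mu
```

After this step, the guidance features share the degraded input's spatial layout (via the moving least-squares warp) but the guidance image's illumination statistics, so fusion by the ASFF layers is better conditioned.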
A deep face dictionary network (termed DFDNet) is proposed to guide the restoration of degraded observations; it achieves plausible performance in both quantitative and qualitative evaluation and generates realistic, promising results on real degraded images without requiring an identity-belonging reference.
A new progressive semantic-aware style transformation framework, named PSFR-GAN, is proposed for face restoration; it makes full use of semantic- and pixel-space information from different scales of input pairs and pretrains a face parsing network that can generate decent parsing maps from real-world LQ face images.