We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering. Existing approaches, however, fall short in two ways: first, they may lack an underlying 3D representation or rely on view-inconsistent rendering, hence synthesizing images that are not multi-view consistent; second, they often depend upon representation network architectures that are not expressive enough, and their results thus suffer in image quality. We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks (π-GAN or pi-GAN), for high-quality 3D-aware image synthesis. π-GAN leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent radiance fields. The proposed approach obtains state-of-the-art results for 3D-aware image synthesis with multiple real and synthetic datasets.
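π-GAN's core building block is an MLP with sinusoidal activations (a SIREN) whose per-layer frequencies and phase shifts are produced by a mapping network from the latent code (FiLM conditioning). A minimal PyTorch sketch of that idea; the layer sizes and names such as `FiLMSirenLayer` are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class FiLMSirenLayer(nn.Module):
    """One SIREN layer modulated by per-layer frequencies (gamma)
    and phase shifts (beta) from a mapping network."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, gamma, beta):
        # FiLM-modulated sinusoidal activation: sin(gamma * (Wx + b) + beta)
        return torch.sin(gamma * self.linear(x) + beta)

class MappingNetwork(nn.Module):
    """Maps a latent code z to frequencies and phase shifts for each layer."""
    def __init__(self, z_dim, hidden_dim, n_layers):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.LeakyReLU(0.2),
            nn.Linear(hidden_dim, n_layers * hidden_dim * 2),
        )
        self.n_layers, self.hidden_dim = n_layers, hidden_dim

    def forward(self, z):
        params = self.net(z).view(-1, self.n_layers, 2, self.hidden_dim)
        return params[:, :, 0], params[:, :, 1]  # gamma, beta

# Usage: map sampled 3D points to (RGB, density), conditioned on z.
z = torch.randn(4, 256)                   # latent codes
pts = torch.randn(4, 1024, 3)             # points sampled along camera rays
mapping = MappingNetwork(z_dim=256, hidden_dim=128, n_layers=3)
layers = nn.ModuleList([FiLMSirenLayer(3, 128)] +
                       [FiLMSirenLayer(128, 128) for _ in range(2)])
head = nn.Linear(128, 4)                  # -> RGB (3) + density (1)
gamma, beta = mapping(z)
h = pts
for i, layer in enumerate(layers):
    h = layer(h, gamma[:, i].unsqueeze(1), beta[:, i].unsqueeze(1))
rgb_sigma = head(h)                       # (4, 1024, 4), then volume-rendered
```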
A pipeline to generate Neural Radiance Fields (NeRF) of an object or a scene of a specific class, conditioned on a single input image. The pipeline is based on π-GAN, a generative model for unconditional 3D-aware image synthesis that maps random latent codes to radiance fields of a class of objects.
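Conditioning such a generator on a single image amounts to replacing the random latent code with one predicted by an encoder, trained so that the rendered radiance field reproduces the input view. A hypothetical sketch of that inversion step; the encoder layout and the commented `render`/`generator` interface are placeholders, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Predicts a latent code from one image, replacing the random z
    of the unconditional 3D-aware GAN."""
    def __init__(self, z_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, z_dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))

encoder = ImageEncoder()
img = torch.rand(2, 3, 64, 64)
z = encoder(img)
# Training signal (sketch): render the predicted field from the input
# camera and penalize reconstruction error alongside the GAN loss.
# recon = render(generator(z), camera)      # pipeline-specific pseudo-step
# loss = mse(recon, img) + adversarial_loss # reconstruction + realism
```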
This paper proposes a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene, and introduces a multi-scale patch-based discriminator to demonstrate synthesis of high-resolution images while training the model from unposed 2D images alone.
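The multi-scale patch trick keeps volume rendering affordable at high resolution: instead of full images, generator and discriminator see K×K pixel patches sampled at random scales and offsets. A minimal sketch of the patch-coordinate sampling (function and variable names are illustrative):

```python
import torch

def sample_patch_coords(img_size, patch_size, batch):
    """Sample a KxK grid of normalized pixel coordinates per image at a
    random scale and offset; rays are cast only through these pixels,
    and the same grid indexes real images for the discriminator."""
    # random scale: small scale = zoomed-in patch, scale 1 = whole image
    smin = patch_size / img_size
    scale = smin + (1 - smin) * torch.rand(batch, 1, 1)
    # random offset keeping the scaled patch inside [-1, 1]^2
    offset = (1 - scale) * (2 * torch.rand(batch, 1, 2) - 1)
    lin = torch.linspace(-1, 1, patch_size)
    grid = torch.stack(torch.meshgrid(lin, lin, indexing="xy"), -1)  # (K,K,2)
    return grid.view(1, -1, 2) * scale + offset                      # (B,K*K,2)

coords = sample_patch_coords(img_size=256, patch_size=32, batch=8)
print(coords.shape)  # torch.Size([8, 1024, 2])
```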
CIPS-3D is presented: a style-based, 3D-aware generator composed of a shallow NeRF network and a deep implicit neural representation (INR) network that synthesizes each pixel value independently, without any spatial convolution or upsampling operation.
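Because each pixel is synthesized independently, the INR part reduces to a pointwise MLP over the per-pixel features produced by the shallow NeRF renderer. A toy sketch of such a per-pixel head, with all dimensions chosen for illustration:

```python
import torch
import torch.nn as nn

class PerPixelINR(nn.Module):
    """Deep implicit head: maps each pixel's NeRF feature (plus a style
    vector) to RGB independently of its neighbors - no convolutions."""
    def __init__(self, feat_dim=64, style_dim=256, hidden=128, depth=4):
        super().__init__()
        layers, d = [], feat_dim + style_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.LeakyReLU(0.2)]
            d = hidden
        layers += [nn.Linear(d, 3)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, feats, style):
        # feats: (B, H*W, feat_dim) from the shallow NeRF renderer
        # style: (B, style_dim), broadcast to every pixel
        style = style.unsqueeze(1).expand(-1, feats.shape[1], -1)
        return self.mlp(torch.cat([feats, style], dim=-1))  # (B, H*W, 3)

rgb = PerPixelINR()(torch.randn(2, 4096, 64), torch.randn(2, 256))
```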
This work proposes a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation, and demonstrates improved performance on 3D shape reconstruction over existing methods as well as its applicability to image relighting.
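The shading guidance works by generating albedo rather than final color and computing shading from surface normals (obtainable as the normalized negative gradient of the density field) under a sampled light, which pressures the learned geometry to explain image appearance. A schematic of the Lambertian shading step; the ambient/diffuse weights `ka` and `kd` are illustrative:

```python
import torch
import torch.nn.functional as F

def shade(albedo, normals, light_dir, ka=0.3, kd=0.7):
    """Lambertian shading: color = albedo * (ambient + diffuse * max(0, n.l)).
    Routing appearance through this model forces normals, and hence the
    density field, to be geometrically meaningful."""
    n = F.normalize(normals, dim=-1)
    l = F.normalize(light_dir, dim=-1)
    diffuse = (n * l).sum(-1, keepdim=True).clamp(min=0.0)
    return albedo * (ka + kd * diffuse)

color = shade(torch.rand(1024, 3),            # per-sample albedo
              torch.randn(1024, 3),           # per-sample normals
              torch.tensor([0.0, 0.0, 1.0]))  # sampled light direction
```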
This work proposes FENeRF, a 3D-aware generator that can produce view-consistent and locally editable portrait images, and reveals that jointly learning semantics and texture helps to generate finer geometry.
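Joint learning here means the radiance field emits a semantic label field alongside color from the same spatially aligned features, so edits to the rendered semantic map can be propagated back into the 3D representation. A toy dual-head field under that reading (layer sizes and class count are illustrative):

```python
import torch
import torch.nn as nn

class DualHeadField(nn.Module):
    """Shared backbone emitting density plus two spatially aligned
    outputs per 3D point: RGB texture and semantic class logits."""
    def __init__(self, hidden=128, n_classes=19):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)          # density
        self.rgb_head = nn.Linear(hidden, 3)            # texture
        self.sem_head = nn.Linear(hidden, n_classes)    # semantics

    def forward(self, pts):
        h = self.backbone(pts)
        return self.sigma_head(h), self.rgb_head(h), self.sem_head(h)

sigma, rgb, sem = DualHeadField()(torch.randn(4, 1024, 3))
```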
GOF is proposed: a novel model based on generative radiance fields that can synthesize high-quality images with 3D consistency while simultaneously learning compact and smooth object surfaces, combining the merits of the two representations in a unified framework.
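One way to picture the unification is to replace the unbounded density output of a radiance field with an occupancy value in [0, 1] used directly as the alpha in volume compositing, so rendered opacity and the learned surface agree. A toy compositing sketch under that reading, not GOF's exact scheme (which additionally shrinks the sampling region toward the surface during training):

```python
import torch

def composite_with_occupancy(occupancy, colors):
    """Alpha-composite samples along each ray, using occupancy in [0,1]
    directly as per-sample alpha instead of 1 - exp(-sigma * delta)."""
    # occupancy: (n_rays, n_samples, 1), colors: (n_rays, n_samples, 3)
    alpha = occupancy.clamp(0.0, 1.0)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]                      # transmittance before each sample
    weights = alpha * trans                 # contribution of each sample
    return (weights * colors).sum(dim=1)    # (n_rays, 3)

rgb = composite_with_occupancy(torch.rand(8, 64, 1), torch.rand(8, 64, 3))
```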
A 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis, which combines a GNeRF with a texture generator to learn 3D human representations that support photo-realistic, controllable generation.
This work proposes a novel framework, termed VolumeGAN, for high-fidelity 3D-aware image synthesis, which explicitly learns a structural representation and a textural representation in a Generative Adversarial Network.
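The structure/texture split can be sketched as a learned 3D feature volume (structure) queried at sampled points by trilinear interpolation, with the resulting features then translated to appearance by a separate style-modulated head (texture). A minimal query sketch using `grid_sample`; the volume shape and module layout are illustrative:

```python
import torch
import torch.nn.functional as F

# Structural representation: a learned 3D feature volume, e.g. produced
# by a 3D-convolutional generator from the latent code.
feature_volume = torch.randn(1, 32, 16, 16, 16)   # (B, C, D, H, W)

def query_volume(volume, points):
    """Trilinearly sample per-point structural features from the volume.
    points: (B, N, 3) in [-1, 1]^3, matching grid_sample conventions."""
    grid = points.view(points.shape[0], -1, 1, 1, 3)         # (B, N, 1, 1, 3)
    feats = F.grid_sample(volume, grid, align_corners=True)  # (B, C, N, 1, 1)
    return feats.squeeze(-1).squeeze(-1).transpose(1, 2)     # (B, N, C)

pts = torch.rand(1, 2048, 3) * 2 - 1            # points along camera rays
structural = query_volume(feature_volume, pts)  # fed to a textural head
print(structural.shape)  # torch.Size([1, 2048, 32])
```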
By gaining an understanding of radiography in 3D space, the method can be applied to radiograph bone extraction and suppression without requiring ground-truth bone labels; this is the first work on radiograph view synthesis.