3260 papers • 126 benchmarks • 313 datasets
Pose-guided image generation is the task of synthesising a new image of a person conditioned on a target pose, so that the generated image depicts the same person in that pose. (Image credit: Coordinate-based Texture Inpainting for Pose-Guided Human Image Generation)
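A common way to supply the pose condition is to render each keypoint as a Gaussian heatmap channel and concatenate it with the source image before feeding a generator. The sketch below is an illustrative minimal version of that conditioning step; the function names and the choice of channel-first layout are assumptions, not taken from any specific paper.

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, sigma=2.0):
    """Render each (x, y) keypoint as a 2-D Gaussian heatmap channel."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for (kx, ky) in keypoints:
        d2 = (xs - kx) ** 2 + (ys - ky) ** 2
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps, axis=0)  # shape: (num_keypoints, H, W)

def condition_on_pose(image, keypoints):
    """Concatenate image channels with pose heatmaps along the channel axis,
    producing the pose-conditioned input a generator would consume."""
    heatmaps = keypoints_to_heatmaps(keypoints, image.shape[1], image.shape[2])
    return np.concatenate([image, heatmaps], axis=0)

# Hypothetical usage: a 3-channel 64x64 image with two target keypoints.
image = np.zeros((3, 64, 64))
pose = [(20, 30), (40, 10)]
x = condition_on_pose(image, pose)
print(x.shape)  # (5, 64, 64)
```

Each heatmap peaks at its keypoint location, giving the generator a spatially aligned, differentiable encoding of the target pose.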
To generate more realistic texture details, a hybrid-granularity attention module is proposed to encode multi-scale fine-grained appearance features as bias terms to augment the coarse-grained prompt.
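The idea of using fine-grained features as a bias on a coarse prompt can be sketched as cross-attention in which an appearance-derived bias is added to the prompt tokens before computing attention. This is an illustrative simplification of such a module, not the paper's actual architecture; all names and shapes here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def biased_cross_attention(queries, prompt_tokens, appearance_bias):
    """Cross-attention where fine-grained appearance features act as an
    additive bias on the coarse-grained prompt tokens (illustrative only)."""
    keys = prompt_tokens + appearance_bias  # augment the coarse prompt
    values = keys
    logits = queries @ keys.T / np.sqrt(queries.shape[-1])
    return softmax(logits, axis=-1) @ values
```

With a zero bias this reduces to ordinary cross-attention over the prompt; a non-zero bias lets local appearance detail steer which prompt tokens dominate.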
A new deep learning approach to pose-guided resynthesis of human photographs, using a fully convolutional architecture with deformable skip connections guided by an estimated correspondence field, together with a new inpainting method that completes the texture of the human body.
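The correspondence field mentioned above tells the network, for each output pixel, where to sample from the source image. A minimal backward-warping routine with bilinear sampling, assuming a single-channel image and a per-pixel (dx, dy) offset field, looks like this (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def warp_by_correspondence(image, flow):
    """Backward-warp `image` (H, W) using a correspondence field `flow`
    (H, W, 2) of per-pixel (dx, dy) source offsets, with bilinear sampling."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Source coordinates for each output pixel, clipped to the image bounds.
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    x0 = np.floor(sx).astype(int); x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.clip(y0 + 1, 0, H - 1)
    wx = sx - x0; wy = sy - y0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A zero field is the identity warp; deformable skip connections apply the same sampling idea to intermediate feature maps rather than to the raw image.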
The problem of reposing a human image into any desired novel pose is addressed: a dense feature volume is implicitly learned from human images, which lends itself to simple and intuitive manipulation through explicit geometric warping.