The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor exhibits obvious, unnatural-looking artifacts. Source: Non-Stationary Texture Synthesis by Adversarial Expansion
These leaderboards are used to track progress in Texture Synthesis
No benchmarks available.
Use these libraries to find Texture Synthesis models and implementations
No subtasks available.
A new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition is introduced, showing that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit.
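The key statistic in this model is the set of Gram matrices (pair-wise feature correlations) of CNN activations computed at several layers. Below is a minimal PyTorch sketch of that texture descriptor and the matching loss; the use of VGG-19 and the particular layer indices are illustrative assumptions, not necessarily the authors' exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

def gram_matrix(feat):
    # feat: (B, C, H, W) activations from one convolutional layer
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    # pair-wise feature correlations, normalised by spatial size
    return torch.bmm(f, f.transpose(1, 2)) / (h * w)

# Assumed setup: VGG-19 features as the "object recognition" network
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
layers = {1, 6, 11, 20, 29}  # assumed layer choice (relu1_1 ... relu5_1)

def texture_loss(synth, exemplar):
    loss, x, y = 0.0, synth, exemplar
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in layers:
            loss = loss + F.mse_loss(gram_matrix(x), gram_matrix(y))
    return loss
```

Synthesis then typically proceeds by optimising the pixels of a noise image with gradient descent on this loss.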
A combination of generative Markov random field models and discriminatively trained deep convolutional neural networks is used for synthesizing 2D images, yielding results far out of reach of classic generative MRF methods.
It is shown that the image generation with PSGANs has properties of a texture manifold: it can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset.
This paper first gives a mathematical explanation of the source of instabilities in many previous approaches, and then addresses these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar.
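As a rough illustration of the histogram-matching idea, the sketch below computes a per-channel loss by remapping each synthesized activation to the exemplar activation of the same rank (sorted-value matching). This is a simplified stand-in that assumes equally sized feature tensors, not the exact histogram loss used in the paper.

```python
import torch

def histogram_loss(synth_feat, exemplar_feat):
    """Simplified per-channel histogram matching loss.

    Both tensors: (C, N) flattened activations with the same N.
    Sorted-value matching approximates exact histogram matching.
    """
    synth_sorted, idx = synth_feat.sort(dim=1)
    target_sorted, _ = exemplar_feat.sort(dim=1)
    # Remap: each synthesized value is pulled toward the exemplar value
    # of the same rank, which equalises the per-channel histograms.
    target = torch.empty_like(synth_feat)
    target.scatter_(1, idx, target_sorted)
    return torch.mean((synth_feat - target.detach()) ** 2)
```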
A simple modification to the representation based on pair-wise products of features in a convolutional network is proposed, which makes it possible to incorporate long-range structure into image generation and to render images that satisfy various symmetry constraints.
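The pair-wise products of features are the Gram matrices above; the long-range variant correlates a feature map with a spatially shifted copy of itself. A minimal sketch, with illustrative, assumed shift offsets:

```python
import torch

def shifted_gram(feat, delta, axis=-1):
    """Cross-correlation Gram between a feature map and a spatially
    shifted copy of itself (sketch of the transformed-Gram idea).
    feat: (B, C, H, W); delta: shift in pixels along the chosen axis.
    """
    if axis == -1:  # horizontal shift
        a, b = feat[..., :, :-delta], feat[..., :, delta:]
    else:           # vertical shift
        a, b = feat[..., :-delta, :], feat[..., delta:, :]
    bsz, c = feat.shape[:2]
    a = a.reshape(bsz, c, -1)
    b = b.reshape(bsz, c, -1)
    return torch.bmm(a, b.transpose(1, 2)) / a.shape[-1]
```

Matching these shifted Gram matrices in addition to the ordinary ones is what lets the loss encode structure at a distance, such as periodicity or symmetry.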
This work proposes a novel application of automated texture synthesis combined with a perceptual loss that focuses on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground-truth images during training, achieving a significant boost in image quality at high magnification ratios.
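A perceptual loss of this kind compares images in the feature space of a pretrained network instead of pixel space. A hedged sketch, again assuming VGG-19 features; the paper's exact layer choice and weighting may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Assumed: a deep VGG-19 feature map as the perceptual space.
_vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(sr, hr):
    # Compare super-resolved and ground-truth images in feature space
    # rather than pixel space, which favours realistic texture over
    # pixel-accurate reconstruction.
    return F.mse_loss(_vgg(sr), _vgg(hr))
```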
A novel two-stream network for image inpainting is proposed, which models the structure-constrained texture synthesis and texture-guided structure reconstruction in a coupled manner so that they better leverage each other for more plausible generation.
This is the first successful, completely data-driven texture synthesis method based on GANs, and it has the following features that make it a state-of-the-art algorithm for texture synthesis: high image quality of the generated textures, very high scalability with respect to the output texture size, and fast, real-time forward generation.
This work demonstrates learning a texture generator from a single template image, and makes qualitative claims that the behaviour exhibited by the NCA model is a learned, distributed, local algorithm to generate a texture, setting this method apart from existing work on texture generation.
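A neural cellular automaton (NCA) applies the same small, learned, local update rule to every cell of a state grid. The sketch below shows one generic NCA update step with fixed Sobel/Laplacian perception filters and a stochastic update mask; the channel count, hidden width, filters, and circular padding are illustrative assumptions rather than the exact architecture of this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextureNCA(nn.Module):
    """Minimal neural cellular automaton update step (generic sketch)."""
    def __init__(self, channels=12, hidden=96):
        super().__init__()
        self.channels = channels
        # Fixed perception filters: identity, Sobel-x, Sobel-y, Laplacian.
        sx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8
        lap = torch.tensor([[1., 2., 1.], [2., -12., 2.], [1., 2., 1.]]) / 16
        ident = torch.zeros(3, 3)
        ident[1, 1] = 1.0
        filters = torch.stack([ident, sx, sx.t(), lap])  # (4, 3, 3)
        self.register_buffer(
            "filters", filters.repeat(channels, 1, 1).unsqueeze(1))
        self.update = nn.Sequential(
            nn.Conv2d(4 * channels, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1, bias=False))

    def forward(self, x, fire_rate=0.5):
        # Local perception: depthwise conv with the fixed filters,
        # using circular padding so the texture tiles seamlessly.
        y = F.conv2d(F.pad(x, (1, 1, 1, 1), mode="circular"),
                     self.filters, groups=self.channels)
        dx = self.update(y)
        # Stochastic, asynchronous-style update mask.
        mask = (torch.rand_like(x[:, :1]) < fire_rate).float()
        return x + dx * mask
```

Training such a model against a texture loss (for instance the Gram-matrix loss sketched above) over many iterated steps is what yields the distributed, local texture-generating behaviour the authors describe.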