The goal of Density Estimation is to give an accurate description of the underlying probability density function of an observed data set with unknown density. Source: Contrastive Predictive Coding Based Feature for Automatic Speaker Verification
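As a concrete illustration of the task, the sketch below fits a classical non-parametric estimator, SciPy's Gaussian kernel density estimate, to synthetic 1-D samples; the bimodal data and the default bandwidth rule are illustrative choices, not taken from any paper listed here.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Draw samples from an "unknown" distribution (here: a bimodal Gaussian mix).
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2.0, 0.5, 500),
                          rng.normal(1.5, 1.0, 500)])

# Fit a kernel density estimate; bandwidth defaults to Scott's rule.
kde = gaussian_kde(samples)

# Evaluate the estimated density p(x) on a grid.
grid = np.linspace(-5, 5, 200)
density = kde(grid)
print(density[:5])  # estimated density at the first few grid points
```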
These leaderboards are used to track progress in Density Estimation
Use these libraries to find Density Estimation models and implementations
High quality image synthesis results are presented using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics, which naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding.
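A minimal sketch of the forward (noising) process that underlies diffusion probabilistic models, written in PyTorch; the closed form for q(x_t | x_0) is standard, but the linear beta schedule and tensor shapes here are illustrative assumptions, not the paper's exact configuration.

```python
import torch

# Linear beta schedule over T timesteps (illustrative values).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.randn(8, 3, 32, 32)      # a batch of "images"
t = torch.randint(0, T, (8,))       # a random timestep per example
noise = torch.randn_like(x0)
xt = q_sample(x0, t, noise)         # progressively noised inputs
```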
Glow, a simple type of generative flow using an invertible 1x1 convolution, is proposed, demonstrating that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images.
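The invertible 1x1 convolution can be read as a learned channel-mixing matrix applied at every pixel, with a cheap log-determinant for the change-of-variables term. A simplified PyTorch sketch (Glow's efficient LU parametrization is omitted):

```python
import torch
import torch.nn as nn

class Invertible1x1Conv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Initialize with a random rotation so the weight starts invertible.
        q, _ = torch.linalg.qr(torch.randn(channels, channels))
        self.weight = nn.Parameter(q)

    def forward(self, x):
        b, c, h, w = x.shape
        # A 1x1 conv is a channel-wise matrix multiply at every pixel.
        y = torch.einsum('ij,bjhw->bihw', self.weight, x)
        # Change-of-variables term: H*W copies of log|det W|.
        logdet = h * w * torch.slogdet(self.weight)[1]
        return y, logdet

    def inverse(self, y):
        return torch.einsum('ij,bjhw->bihw', torch.inverse(self.weight), y)
```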
This work extends the space of probabilistic models using real-valued non-volume preserving (real NVP) transformations, a set of powerful invertible and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact sampling, exact inference of latent variables, and an interpretable latent space.
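Real NVP's building block is the affine coupling layer: half the dimensions pass through unchanged and parameterize an elementwise affine transform of the other half, so both the inverse and the Jacobian log-determinant are exact. A minimal sketch, with an illustrative two-layer MLP as the conditioner:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        # Small conditioner predicting log-scale s and shift t from x1.
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(s) + t       # y1 = x1 passes through unchanged
        logdet = s.sum(dim=1)            # exact log|det Jacobian|
        return torch.cat([x1, y2], dim=1), logdet

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)
```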
The importance weighted autoencoder (IWAE) is a generative model with the same architecture as the VAE but trained with a strictly tighter log-likelihood lower bound derived from importance weighting; empirically, IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
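The tighter bound is a logsumexp over k importance weights rather than a single-sample ELBO; the sketch below computes it from per-sample log-weights, whose shape convention is an assumption made here for illustration.

```python
import math
import torch

def iwae_bound(log_weights):
    """Importance weighted lower bound.

    log_weights: tensor of shape (k, batch) holding
    log p(x, z_i) - log q(z_i | x) for k samples z_i ~ q(z|x).
    Returns L_k = log( (1/k) * sum_i w_i ) per batch element.
    """
    k = log_weights.shape[0]
    return torch.logsumexp(log_weights, dim=0) - math.log(k)

# For k = 1 this reduces to the standard ELBO; larger k tightens the bound.
lw = torch.randn(5, 16)  # fake log-weights: 5 samples x 16 examples
print(iwae_bound(lw).mean())
```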
This work describes an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data, which is called Masked Autoregressive Flow.
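Conceptually, one MAF layer models x_i = u_i * exp(alpha_i) + mu_i, where (mu_i, alpha_i) depend only on x_{<i}; density evaluation inverts this in a single parallel pass. A toy sketch with a stand-in strictly-triangular linear conditioner (in practice a MADE network plays this role):

```python
import math
import torch

def maf_log_prob(x, conditioner):
    """log p(x) for one MAF layer with a standard normal base density.

    conditioner(x) must return (mu, alpha) where entry i depends only on
    x[:, :i]; then u = (x - mu) * exp(-alpha) inverts the transform in one pass.
    """
    mu, alpha = conditioner(x)
    u = (x - mu) * torch.exp(-alpha)
    log_base = -0.5 * (u ** 2).sum(dim=1) - 0.5 * x.shape[1] * math.log(2 * math.pi)
    return log_base - alpha.sum(dim=1)   # change-of-variables correction

D = 4
L = torch.tril(torch.randn(D, D), diagonal=-1)   # strictly lower-triangular

def toy_conditioner(x):
    h = x @ L.T                     # h_i mixes only x_{<i}
    return h, 0.1 * torch.tanh(h)   # (mu_i, alpha_i), kept small for stability

x = torch.randn(8, D)
print(maf_log_prob(x, toy_conditioner))
```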
This work introduces a simple modification for autoencoder neural networks that yields powerful generative models and demonstrates that this approach is competitive with state-of-the-art tractable distribution estimators.
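This summary matches MADE-style masking: the autoencoder's weight matrices are masked so that output i sees only inputs x_{<i}, turning each output into a valid conditional. A simplified single-layer sketch, reducing the paper's general degree-assignment recipe to the natural input ordering:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are zeroed by a fixed binary mask."""
    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        self.register_buffer('mask', mask)

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

D = 4
# Degrees: input i has degree i+1; output i may only connect to inputs of
# strictly smaller degree, which enforces the autoregressive property.
deg = torch.arange(1, D + 1)
mask = (deg.unsqueeze(0) < deg.unsqueeze(1)).float()  # mask[i, j] = 1 iff j < i

layer = MaskedLinear(D, D, mask)
x = torch.randn(2, D)
y = layer(x)          # output 0 sees no inputs; output i sees only x_{<i}
print(layer.mask)
```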
The gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
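The gated layer replaces PixelCNN's ReLU with a multiplicative unit, y = tanh(W_f * x) ⊙ σ(W_g * x); the sketch below shows only this gating, omitting the paper's masked convolutions and conditioning inputs.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated activation unit: y = tanh(conv_f(x)) * sigmoid(conv_g(x))."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # One convolution produces both the feature and gate branches.
        self.conv = nn.Conv2d(channels, 2 * channels, kernel_size, padding=pad)

    def forward(self, x):
        f, g = self.conv(x).chunk(2, dim=1)
        return torch.tanh(f) * torch.sigmoid(g)

x = torch.randn(1, 64, 32, 32)
print(GatedConv2d(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```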
This work presents a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise.
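A hedged sketch of sampling by Euler-Maruyama integration of the reverse-time SDE in the variance-preserving case; `score_fn` is a placeholder for a trained score network, and the exact standard-normal score is substituted here only so the example runs end to end.

```python
import torch

def beta(t):
    """Linear noise schedule beta(t) on [0, 1] (illustrative values)."""
    return 0.1 + (20.0 - 0.1) * t

def reverse_sde_sample(score_fn, shape, steps=1000):
    """Euler-Maruyama integration of the VP reverse-time SDE:
    dx = [-0.5*beta(t)*x - beta(t)*score(x, t)] dt + sqrt(beta(t)) dw_bar,
    run backwards from t = 1 (prior) to t = 0 (data)."""
    x = torch.randn(shape)                 # start from the known prior
    dt = -1.0 / steps
    for i in range(steps):
        t = 1.0 - i / steps
        drift = -0.5 * beta(t) * x - beta(t) * score_fn(x, t)
        x = x + drift * dt + (beta(t) * abs(dt)) ** 0.5 * torch.randn_like(x)
    return x

# Placeholder score: the exact score of a standard normal, for illustration.
samples = reverse_sde_sample(lambda x, t: -x, (16, 2))
print(samples.mean(), samples.std())
```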