Normalising flows (NFs) map two density functions via a differentiable bijection whose Jacobian determinant can be computed efficiently. Recently, as an alternative to hand-crafted bijections, Huang et al. (2018) proposed neural autoregressive flow (NAF), which is a universal approximator for density functions. Their flow is a neural network (NN) whose parameters are predicted by another NN. The latter grows quadratically with the size of the former, and thus an efficient technique for parametrization is needed. We propose block neural autoregressive flow (B-NAF), a much more compact universal approximator of density functions, where we model a bijection directly using a single feed-forward network. Invertibility is ensured by carefully designing each affine transformation with block matrices that make the flow autoregressive and (strictly) monotone. We compare B-NAF to NAF and other established flows on density estimation and approximate inference for latent variable models. Our proposed flow is competitive across datasets while using orders of magnitude fewer parameters.
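A minimal PyTorch sketch of such a block-masked affine layer is given below. The class name, shapes, and initialisation are illustrative assumptions, and the log-Jacobian bookkeeping that B-NAF also needs for density evaluation is omitted; the snippet only shows the masking that makes the layer autoregressive and strictly monotone.

```python
import torch
import torch.nn as nn

class BlockMaskedLinear(nn.Module):
    """One block-masked affine layer in the spirit of B-NAF (sketch only).

    For d input variables with `a` hidden units each, mapped to `b` units
    each, the (d*b) x (d*a) weight matrix is constrained block
    lower-triangular; the diagonal blocks are passed through exp() so they
    are strictly positive.
    """
    def __init__(self, d, a, b):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(d * b, d * a))
        self.bias = nn.Parameter(torch.zeros(d * b))
        block_row = torch.arange(d * b).unsqueeze(1) // b  # block row index
        block_col = torch.arange(d * a).unsqueeze(0) // a  # block column index
        self.register_buffer("diag_mask", (block_row == block_col).float())
        self.register_buffer("lower_mask", (block_row > block_col).float())

    def forward(self, x):
        # Positive diagonal blocks + free strictly-lower blocks; upper blocks
        # are zeroed, so dy/dx stays lower-triangular with positive diagonal.
        w = self.weight.exp() * self.diag_mask + self.weight * self.lower_mask
        return x @ w.t() + self.bias
```

Stacking layers like this with strictly increasing elementwise activations between them keeps each output dimension a strictly monotone function of the corresponding input dimension, which is what guarantees invertibility of the flow.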
It is proved that a flow must become arbitrarily numerically noninvertible in order to approximate the target closely, and Continuously Indexed Flows (CIFs) are proposed, which replace the single bijection used by normalising flows with a continuously indexed family of bijections.
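As a rough illustration of what a continuously indexed family of bijections means, the NumPy snippet below indexes a simple affine bijection by a continuous variable u. In the actual CIF construction the index-to-parameter maps are learned networks, the index has a learned conditional distribution, and training is variational; all of that is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bijection(z, u):
    # For each fixed index u this is an invertible affine map of z;
    # varying u continuously selects different bijections from the family.
    # tanh(u) and 0.5*u stand in for learned index-to-parameter networks.
    scale, shift = np.exp(np.tanh(u)), 0.5 * u
    return scale * z + shift

# Generative sampling with a continuous index: draw the latent z, draw an
# index u, then push z through the bijection that u selects.
z = rng.standard_normal(5)
u = rng.standard_normal(5)
x = bijection(z, u)
```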
The experimental results indicate that the proposed framework can successfully train deep SCMs that are capable of all three levels of Pearl's ladder of causation: association, intervention, and counterfactuals, giving rise to a powerful new approach for answering causal questions in imaging applications and beyond.
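The three rungs can be made concrete on a toy, hand-specified linear SCM. The snippet below is purely illustrative; the paper's contribution is performing the abduction step for deep, learned mechanisms, using normalising flows to invert them.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy structural causal model (illustrative, not the paper's deep SCM):
#   x = u_x,  y = 2*x + u_y,  with exogenous noise u_x, u_y.
u_x, u_y = rng.standard_normal(), rng.standard_normal()
x_obs = u_x
y_obs = 2 * x_obs + u_y

# Rung 2 (intervention): sample from p(y | do(x=1)) by replacing the
# mechanism for x and drawing fresh exogenous noise for y.
y_do = 2 * 1.0 + rng.standard_normal()

# Rung 3 (counterfactual): abduction (recover u_y from the observation),
# action (set x=1), prediction (re-run the model with the SAME noise).
u_y_abducted = y_obs - 2 * x_obs
y_cf = 2 * 1.0 + u_y_abducted
```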
The method, Bayes by Hypernet, is able to model a richer variational distribution than previous methods and achieves comparable predictive performance on the MNIST classification task while providing higher predictive uncertainties compared to MC-Dropout and regular maximum likelihood training.
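A hypothetical sketch of the core idea follows: a hypernetwork maps a noise draw to the weights of a small primary network, so repeated noise draws sample from an implicit distribution over weights. All names and sizes are assumptions, and the variational training objective used in the paper is omitted.

```python
import torch
import torch.nn as nn

# A hypernetwork generates the weights of a tiny primary regressor, so
# each noise sample epsilon yields one weight sample: an implicit
# variational distribution over weights.
primary_in, primary_out, noise_dim = 4, 1, 8
n_weights = primary_in * primary_out + primary_out  # weight matrix + bias

hypernet = nn.Sequential(
    nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, n_weights)
)

def sample_prediction(x):
    eps = torch.randn(noise_dim)               # one draw = one weight sample
    w = hypernet(eps)
    W = w[: primary_in * primary_out].view(primary_out, primary_in)
    b = w[primary_in * primary_out :]
    return x @ W.t() + b

x = torch.randn(3, primary_in)
preds = torch.stack([sample_prediction(x) for _ in range(10)])
uncertainty = preds.std(dim=0)                 # spread across weight samples
```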
Experiments show that a neural HMM TTS system that uses normalising flows to describe the highly non-Gaussian distribution of speech acoustics needs fewer updates than comparable methods to produce accurate pronunciation and subjective speech quality close to that of natural speech.
This work proposes an extension to state space models of time series data based on a novel generative model for latent temporal states: the neural moving average model, which permits a subsequence to be sampled without drawing from the entire distribution, enabling training iterations to use mini-batches of the time series at low computational cost.
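A toy NumPy sketch of the moving-average idea follows, with an arbitrary fixed map standing in for the neural one: because each latent state depends only on a length-(q+1) window of noise variables, any subsequence can be sampled from purely local noise.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 3                                   # moving-average window length

def g(window):
    # Placeholder for the neural map from a noise window to a latent
    # state; a fixed weighted sum is used here purely for illustration.
    weights = np.array([0.5, 0.3, 0.2, 0.1])
    return weights @ window

def sample_subsequence(s, t):
    # Latents z_s..z_t depend only on noise e_{s-q}..e_t, so a mini-batch
    # subsequence can be drawn without simulating the whole series.
    noise = rng.standard_normal(t - s + 1 + q)
    return np.array([g(noise[i : i + q + 1]) for i in range(t - s + 1)])

z_batch = sample_subsequence(100, 131)  # a 32-step window, sampled locally
```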
VFlow achieves a new state-of-the-art 2.98 bits per dimension on the CIFAR-10 dataset and is more compact than previous models to reach similar modeling quality.
Woodbury transformations are introduced, which achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity, allowing higher-likelihood models to be learned than with other flow architectures while still enjoying their efficiency advantages.
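Both identities are easy to demonstrate. The NumPy snippet below applies a rank-k update x -> (I + UV)x, inverts it with the Woodbury identity, and obtains its log-determinant from a k-by-k matrix via Sylvester's identity; the shapes are arbitrary assumptions, and the paper composes such transformations into full flow layers.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 100, 5                           # low-rank update: k << d
U = rng.standard_normal((d, k)) * 0.1
V = rng.standard_normal((k, d)) * 0.1
x = rng.standard_normal(d)

# Forward transform y = (I + U V) x, without forming the d x d matrix.
y = x + U @ (V @ x)

# Inverse via the Woodbury identity:
# (I + U V)^-1 = I - U (I_k + V U)^-1 V   -- only a k x k solve.
small = np.eye(k) + V @ U
x_rec = y - U @ np.linalg.solve(small, V @ y)
assert np.allclose(x_rec, x)

# Log-determinant via Sylvester's identity:
# det(I_d + U V) = det(I_k + V U)        -- O(k^3) instead of O(d^3).
sign, logdet = np.linalg.slogdet(small)
```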