Also known as Bayesian filtering or recursive Bayesian estimation, this task aims to perform inference on latent state-space models.
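To make the task concrete, here is a minimal sketch of recursive Bayesian estimation for a 1-D linear-Gaussian state-space model, where the recursion has a closed form (the Kalman filter). All parameter values are illustrative assumptions, not taken from any paper on this page.

```python
import numpy as np

def kalman_step(m, P, y, a=1.0, q=0.1, h=1.0, r=0.5):
    """One recursive Bayesian update for the 1-D linear-Gaussian model
    x_t = a*x_{t-1} + N(0, q),  y_t = h*x_t + N(0, r).
    (m, P) is the current posterior mean and variance of the state."""
    # Predict: push the posterior through the transition model.
    m_pred = a * m
    P_pred = a * P * a + q
    # Update: condition on the new observation y (Bayes' rule).
    S = h * P_pred * h + r          # innovation variance
    K = P_pred * h / S              # Kalman gain
    m_new = m_pred + K * (y - h * m_pred)
    P_new = (1.0 - K * h) * P_pred
    return m_new, P_new

# Filter a short synthetic observation sequence from a N(0, 1) prior.
m, P = 0.0, 1.0
for y in [0.9, 1.1, 0.8]:
    m, P = kalman_step(m, P, y)
```

Each step is one application of Bayes' rule: predict with the transition density, then condition on the new measurement; nonlinear or non-Gaussian models replace this closed form with approximations such as particle filters.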
(Image credit: Papersgraph)
These leaderboards are used to track progress in Sequential Bayesian Inference.
No benchmarks available.
Use these libraries to find Sequential Bayesian Inference models and implementations.
No datasets available.
No subtasks available.
It is proved that an ODE-based neural operator transporting particles from a prior to the posterior after a new observation exists, and that its neural parameterization can be trained in a meta-learning framework, allowing the operator to generalize across different priors and observations and to extend to sequential Bayesian inference.
A novel sequential Monte Carlo filter is introduced which aims at efficient sampling of high-dimensional state spaces with a limited number of particles using a sequence of mappings that minimizes the Kullback-Leibler divergence between the posterior and the sequence of intermediate densities.
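For context, the standard sequential Monte Carlo recursion that such methods build on can be sketched as a bootstrap particle filter: propagate particles through the transition model, reweight by the observation likelihood, and resample. The KL-minimizing intermediate mappings of the paper above are not shown; this is only the baseline recursion, with assumed illustrative model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(ys, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter for the 1-D random-walk model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r)."""
    x = rng.normal(0.0, 1.0, n_particles)                    # prior particles
    for y in ys:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)     # propagate
        logw = -0.5 * (y - x) ** 2 / r                       # log-likelihood
        w = np.exp(logw - logw.max())                        # stable weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)      # resample
        x = x[idx]
    return x.mean(), x.var()

post_mean, post_var = bootstrap_filter([0.9, 1.1, 0.8])
```

With few particles in high dimensions the weights degenerate, which is exactly the failure mode that intermediate-density constructions like the one above aim to mitigate.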
It is argued there are many cases where the distribution of state given measurement is better-approximated as Gaussian, especially when the dimensionality of measurements far exceeds that of states and the Bernstein–von Mises theorem applies.
The DKF has computational complexity similar to the Kalman filter, allowing it in some cases to perform much faster than particle filters with similar precision, while better accounting for nonlinear and non-Gaussian observation models than Kalman-based extensions.
This work establishes matrix-based conditions under which the effect of older observations diminishes over time, in a manner analogous to Polyak’s heavy ball momentum, and develops a novel optimization algorithm that considers the entire history of gradients and Hessians when forming an update.
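The heavy-ball analogy referenced above can be illustrated with Polyak's classical iteration, in which each step implicitly weights the entire gradient history with geometrically decaying coefficients; this decaying memory is the behavior the paper's matrix conditions mirror for old observations. A minimal sketch on a quadratic, with assumed step sizes:

```python
def heavy_ball(grad, x0, alpha=0.1, beta=0.8, steps=200):
    """Polyak heavy-ball iteration
    x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1}).
    Unrolling the momentum term shows the update is a beta^j-weighted
    sum over all past gradients, i.e. old gradients fade geometrically."""
    x_prev, x = x0, x0
    for _ in range(steps):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_star = heavy_ball(lambda x: 2.0 * (x - 3.0), 0.0)
```

The paper's algorithm additionally tracks Hessian history; only the gradient-memory aspect is sketched here.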
It is argued that probabilistic models of the continual-learning generative process, rather than sequential Bayesian inference over Bayesian neural network weights, are needed to prevent catastrophic forgetting in Bayesian neural networks.
A predictive digital twin approach to the health monitoring, maintenance, and management planning of civil engineering structures is proposed employing a probabilistic graphical model, which allows all relevant sources of uncertainty to be taken into account.
It is demonstrated that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods while depending less on maintaining a set of representative points from previous tasks.