3260 papers • 126 benchmarks • 313 datasets
The DimeNet++ model is proposed, which is 8x faster and 10% more accurate than the original DimeNet on the QM9 benchmark of equilibrium molecules; ensembling and mean-variance estimation for uncertainty quantification are investigated with the goal of accelerating the exploration of the vast space of non-equilibrium structures.
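The ensembling and mean-variance estimation mentioned here can be illustrated with a minimal NumPy sketch (not the DimeNet++ code; the ensemble outputs below are randomly generated stand-ins): each ensemble member predicts a mean and a variance, and the combined prediction is moment-matched as a uniform Gaussian mixture.

```python
import numpy as np

# Hypothetical outputs of an M-member ensemble, each doing mean-variance
# estimation: a predicted mean and variance per test molecule.
M, N = 5, 1000
rng = np.random.default_rng(0)
means = rng.normal(size=(M, N))             # per-member predicted means
variances = rng.uniform(0.1, 1.0, (M, N))   # per-member predicted variances

# Moment matching for a uniform mixture of M Gaussians: the predictive
# variance splits into aleatoric (average member variance) and epistemic
# (disagreement between member means) parts.
mu = means.mean(axis=0)
sigma2 = variances.mean(axis=0) + means.var(axis=0)
```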
An algorithm is presented that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%, providing a formal finite-sample coverage guarantee for every model and dataset.
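As a generic illustration of such predictive sets (a minimal split-conformal sketch, not the paper's exact algorithm; function and variable names are hypothetical):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets with >= 1 - alpha marginal coverage.

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   true calibration labels
    test_probs: (m, K) softmax outputs on test points
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the softmax mass on the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # A label enters the set when its own score would not exceed q.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```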
Spectral-normalized Neural Gaussian Process (SNGP) is proposed, a simple method that improves the distance-awareness of modern DNNs by adding a weight normalization step during training and replacing the output layer with a Gaussian process; it outperforms the other single-model approaches.
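A rough PyTorch sketch of the two SNGP ingredients, with illustrative layer sizes and a fixed random-Fourier-feature approximation standing in for the full GP output layer (an assumption-laden toy, not the reference implementation):

```python
import math
import torch
import torch.nn as nn

class TinySNGP(nn.Module):
    def __init__(self, d_in, d_hidden=64, n_rff=128, n_classes=10):
        super().__init__()
        # Spectral normalization bounds each layer's Lipschitz constant so the
        # hidden representation roughly preserves input distances.
        self.body = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(d_in, d_hidden)), nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(d_hidden, d_hidden)), nn.ReLU(),
        )
        # Fixed random Fourier features approximate an RBF-kernel GP layer.
        self.register_buffer("W", torch.randn(d_hidden, n_rff))
        self.register_buffer("b", 2 * math.pi * torch.rand(n_rff))
        self.out = nn.Linear(n_rff, n_classes)

    def forward(self, x):
        h = self.body(x)                       # distance-aware features
        phi = torch.cos(h @ self.W + self.b)   # random Fourier features
        return self.out(phi)                   # GP posterior-mean logits
```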
A novel rule-based approach for handling regression problems is presented that combines elements from two frameworks: it provides information about the uncertainty of the parameters of interest via Bayesian inference, and it allows the incorporation of expert knowledge through rule-based systems.
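As a toy illustration of the Bayesian-inference half of that combination (a conjugate sketch with made-up data, unrelated to the paper's rule-based machinery), the posterior over a regression slope carries exactly the parameter uncertainty a point estimate would hide:

```python
import numpy as np

# Conjugate Bayesian regression through the origin: prior w ~ N(0, tau2),
# likelihood y ~ N(w * x, sigma2) with known noise variance.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.3, 50)

tau2, sigma2 = 10.0, 0.3 ** 2
post_var = 1.0 / (1.0 / tau2 + (x @ x) / sigma2)
post_mean = post_var * (x @ y) / sigma2
print(post_mean, np.sqrt(post_var))  # posterior mean and std of the slope
```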
This work develops a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model, and finds that it consistently performs as well as or better than prior offline model-free and model-based methods on widely studied offline RL benchmarks, including image-based tasks.
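The value regularizer at the heart of COMBO can be sketched as a conservative critic loss, pushing Q down on model-rollout tuples and up on dataset tuples on top of the usual TD error (a schematic with hypothetical batch fields, not the official implementation):

```python
def combo_critic_loss(q_net, batch_real, batch_model, beta=1.0):
    """Sketch of a COMBO-style critic objective.

    batch_real:  transitions sampled from the offline dataset
    batch_model: transitions generated by rollouts under the learned model
    """
    s, a, y = batch_real["s"], batch_real["a"], batch_real["td_target"]
    td = ((q_net(s, a) - y) ** 2).mean()  # standard Bellman error term
    # Conservative regularizer: lower Q on out-of-support model rollouts,
    # raise it on in-support dataset state-action pairs.
    penalty = (q_net(batch_model["s"], batch_model["a"]).mean()
               - q_net(s, a).mean())
    return td + beta * penalty
```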
This hands-on introduction aims to give the reader a working understanding of conformal prediction and related distribution-free uncertainty quantification techniques in one self-contained document.
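In the same spirit as that introduction, the regression counterpart of the conformal classification sketch above fits in a few lines (absolute residuals as the nonconformity score; names are hypothetical):

```python
import numpy as np

def conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Split conformal regression: symmetric intervals, >= 1 - alpha coverage."""
    n = len(cal_y)
    scores = np.abs(cal_y - cal_pred)  # residuals on held-out calibration data
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q  # interval endpoints per test point
```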
This paper designs a data-driven method, augmented by an effective information fusion mechanism, that learns from historical data and incorporates prior knowledge from numerical weather prediction (NWP) by proposing a novel negative log-likelihood error (NLE) loss function.
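A minimal PyTorch sketch of training with a Gaussian negative log-likelihood over a predicted mean and variance (a generic stand-in; the paper's exact NLE formulation and its NWP fusion mechanism are not reproduced here):

```python
import torch

def gaussian_nll(mean, log_var, target):
    # NLL of target under N(mean, exp(log_var)), up to an additive constant;
    # predicting log-variance keeps the variance positive.
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Hypothetical usage: the forecaster outputs both a point prediction and a
# log-variance, so the loss penalizes errors and miscalibrated confidence.
mean, log_var = torch.randn(32), torch.zeros(32)
loss = gaussian_nll(mean, log_var, torch.randn(32))
```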
Under the assumption that the underlying neural networks generalize well, it is proved that the deep learning Monte Carlo (MC) and quasi-Monte Carlo (QMC) algorithms are guaranteed to be faster than the baseline (quasi-)Monte Carlo methods.
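The baseline being compared can be illustrated with SciPy (a toy integrand standing in for the actual functionals): plain MC uses i.i.d. uniform points, while QMC uses low-discrepancy Sobol points that typically converge faster for smooth integrands.

```python
import numpy as np
from scipy.stats import qmc

d, n = 4, 2 ** 12
f = lambda x: np.exp(-np.sum(x ** 2, axis=1))  # toy integrand on [0, 1]^d

# Plain Monte Carlo: i.i.d. uniform points, error decays like O(n^-1/2).
mc_est = f(np.random.default_rng(0).random((n, d))).mean()

# Quasi-Monte Carlo: scrambled Sobol points, close to O(n^-1) for smooth f.
qmc_est = f(qmc.Sobol(d, scramble=True, seed=0).random(n)).mean()
print(mc_est, qmc_est)
```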
This work empirically demonstrates that DKT outperforms several state-of-the-art algorithms in few-shot classification, and that it is the state of the art for cross-domain adaptation and regression.
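The deep-kernel idea behind DKT can be sketched as an exact GP whose kernel acts on learned features rather than raw inputs (a NumPy toy; the random feature map below stands in for a trained network):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 16))
feat = lambda x: np.tanh(x @ W)          # stand-in "deep" feature map

X, y = rng.normal(size=(20, 5)), rng.normal(size=20)
Xs = rng.normal(size=(3, 5))

# Exact GP posterior with the kernel applied to feat(x) instead of x.
K = rbf(feat(X), feat(X)) + 1e-3 * np.eye(20)    # noisy train covariance
Ks = rbf(feat(Xs), feat(X))
mean = Ks @ np.linalg.solve(K, y)                 # posterior mean
var = rbf(feat(Xs), feat(Xs)).diagonal() - np.einsum(
    "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))     # posterior variance
```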