3260 papers • 126 benchmarks • 313 datasets
A prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis.
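As a concrete illustration, the classical normal-theory prediction interval for simple linear regression can be computed in a few lines. This is a minimal sketch (NumPy only, using the 1.96 normal quantile as an approximation to the exact t quantile); the function name and interface are illustrative, not from any particular library.

```python
import numpy as np

def linear_prediction_interval(x, y, x_new, z=1.96):
    """Normal-theory prediction interval for a future observation at x_new,
    after fitting simple linear regression to (x, y) by least squares."""
    n = len(x)
    xbar = x.mean()
    sxx = ((x - xbar) ** 2).sum()
    b1 = ((x - xbar) * (y - y.mean())).sum() / sxx   # slope
    b0 = y.mean() - b1 * xbar                        # intercept
    resid = y - (b0 + b1 * x)
    s = np.sqrt((resid ** 2).sum() / (n - 2))        # residual std. error
    # The leading "1 +" distinguishes a prediction interval (for a new
    # observation) from a confidence interval for the mean response.
    se = s * np.sqrt(1 + 1 / n + (x_new - xbar) ** 2 / sxx)
    pred = b0 + b1 * x_new
    return pred - z * se, pred + z * se
```

Under Gaussian errors the interval covers a new observation at `x_new` with roughly 95% probability; distribution-free alternatives such as conformal inference drop the Gaussian assumption.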
A general framework for distribution-free predictive inference in regression based on conformal inference, which allows a prediction band for the response variable to be constructed from any estimator of the regression function, together with a model-free notion of variable importance called leave-one-covariate-out (LOCO) inference.
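The core of such conformal frameworks is the split (inductive) conformal recipe: fit any estimator on one split, score absolute residuals on a calibration split, and widen the point prediction by a finite-sample-corrected quantile of those scores. The sketch below is a hedged, minimal version with a 1-nearest-neighbour base estimator standing in for "any estimator of the regression function"; names and the toy predictor are assumptions, not the cited paper's code.

```python
import numpy as np

def split_conformal_interval(x_train, y_train, x_cal, y_cal, x_new, alpha=0.1):
    """Split conformal prediction interval wrapped around a toy 1-NN
    regressor; any fitted point predictor could be substituted."""
    def predict(x):
        # 1-nearest-neighbour prediction from the training split
        i = np.abs(x_train - x).argmin()
        return y_train[i]
    # Conformity scores: absolute residuals on the held-out calibration split
    scores = np.array([abs(yc - predict(xc)) for xc, yc in zip(x_cal, y_cal)])
    n = len(scores)
    # Finite-sample-corrected quantile gives 1 - alpha marginal coverage
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level)
    pred = predict(x_new)
    return pred - q, pred + q
```

The coverage guarantee is distribution-free: it requires only exchangeability of the calibration and test points, not a correctly specified model.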
A novel procedure with provably small regret over all local time intervals of a given width is developed by modifying the adaptive conformal inference (ACI) algorithm to contain an additional step in which the step-size parameter of ACI's gradient descent update is tuned over time.
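The ACI update being tuned here is a single online gradient step on the miscoverage level: after each interval is revealed to cover or miss, the level is nudged toward the target. A minimal sketch of that step (the function name is illustrative; the step-size `gamma` is exactly the parameter the cited work tunes over time):

```python
def aci_update(alpha_t, target_alpha, covered, gamma=0.01):
    """One step of Adaptive Conformal Inference's online update.

    alpha_t is the miscoverage level currently used to cut the conformal
    quantile. err_t = 0 if the last interval covered y_t, else 1; the
    update alpha_{t+1} = alpha_t + gamma * (target_alpha - err_t) drives
    long-run miscoverage toward target_alpha even under distribution shift.
    """
    err_t = 0.0 if covered else 1.0
    return alpha_t + gamma * (target_alpha - err_t)
```

A miss shrinks `alpha_t` (widening future intervals); a covered point grows it (tightening them), so the realized coverage self-corrects online.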
It is shown that there are prediction tasks for which a model can gain both computational efficiency and prediction accuracy by being allowed to choose its own sampling rate for making predictions.
A method to build distribution-free prediction intervals for time-series based on conformal inference that wraps around any ensemble estimator to construct sequential prediction intervals is developed, which is easy to implement, scalable to producing arbitrarily many prediction intervals sequentially, and well-suited to a wide range of regression functions.
A comprehensive R package integrating 16 methods for building prediction intervals with random forests and boosted forests is developed; experiments show that the proposed method is highly competitive and, overall, outperforms competing methods.
Non-parametric bootstrapped uncertainty estimates and SHAP values are used to provide explainable uncertainty estimation, a technique that aims to monitor the deterioration of machine learning models in deployment environments and to determine the source of model deterioration when target labels are not available.
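A non-parametric bootstrap prediction interval of the kind referenced above can be sketched as follows: refit the model on resampled rows, and add a resampled residual to each refit's prediction so the interval reflects observation noise as well as fit uncertainty. This is a hedged toy version with a linear fit as the base model; the cited work pairs such intervals with SHAP values, which are omitted here.

```python
import numpy as np

def bootstrap_prediction_interval(x, y, x_new, n_boot=500, alpha=0.1, seed=0):
    """Non-parametric bootstrap prediction interval at x_new, using a
    simple linear model refit on each resample (illustrative base model)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample rows with replacement
        b1, b0 = np.polyfit(x[idx], y[idx], 1)   # refit on the resample
        resid = y[idx] - (b0 + b1 * x[idx])
        # Add a resampled residual so the interval covers a new observation,
        # not just the mean prediction
        preds.append(b0 + b1 * x_new + rng.choice(resid))
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

In a deployment-monitoring setting, widening of such intervals over time is one signal of model deterioration that needs no target labels.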
This work argues that Adaptive Conformal Inference (ACI), developed for distribution-shift time series, is a good procedure for time series with general dependency, and proposes a parameter-free method, AgACI, that adaptively builds upon ACI based on online expert aggregation.
Uncertainty quantification techniques with rigorous statistical guarantees are developed for image-to-image regression problems, deriving uncertainty intervals around each pixel that are guaranteed to contain the true value with a user-specified confidence probability.