1. Reliable Graph Neural Networks for Drug Discovery Under Distributional Shift
2. Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
3. Highly accurate protein structure prediction with AlphaFold
4. Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation
5. Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues
6. Laplace Redux - Effortless Bayesian Deep Learning
7. Being a Bit Frequentist Improves Bayesian Neural Networks
8. A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
9. Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
10. Can convolutional ResNets approximately preserve input distances? A frequency analysis perspective
11. Enhanced Isotropy Maximization Loss: Seamless and High-Performance Out-of-Distribution Detection Simply Replacing the SoftMax Loss
12. Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions
13. Accurate and Reliable Forecasting using Stochastic Differential Equations
14. On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty
15. Energy-based Out-of-distribution Detection
16. Learnable Uncertainty under Laplace Approximations
17. Traces of Class/Cross-Class Structure Pervade Deep Learning Spectra
18. Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks
19. Sparse Gaussian Processes with Spherical Harmonic Features
20. Understanding and mitigating exploding inverses in invertible neural networks
21. Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation
22. Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
23. Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors
24. User Utterance Acquisition for Training Task-Oriented Bots: A Review of Challenges, Techniques and Opportunities
25. Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
26. Evolution of Semantic Similarity—A Survey
27. Improved Baselines with Momentum Contrastive Learning
28. Fast Predictive Uncertainty for Classification with Bayesian Deep Networks
29. Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
30. Bayesian Deep Learning and a Probabilistic Perspective of Generalization
31. BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning
32. A Simple Framework for Contrastive Learning of Visual Representations
33. RandAugment: Practical automated data augmentation with a reduced search space
34. Towards neural networks that provably know when they don't know
35. Deep Ensembles: A Loss Landscape Perspective
36. Out-of-Domain Detection for Natural Language Understanding in Dialog Systems
37. An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction
38. Isotropic Maximization Loss and Entropic Score: Fast, Accurate, Scalable, Unexposed, Turnkey, and Native Neural Networks Out-of-Distribution Detection
39. Variational Autoencoders and Nonlinear ICA: A Unifying Framework
40. Likelihood Ratios for Out-of-Distribution Detection
41. Practical Deep Learning with Bayesian Principles
42. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
43. Approximate Inference Turns Deep Networks into Gaussian Processes
44. Reliable training and estimation of variance networks
45. Residual Networks as Flows of Diffeomorphisms
46. Limitations of the Empirical Fisher Approximation
47. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
48. Towards Open Intent Discovery for Conversational Text
49. A deterministic and computable Bernstein-von Mises theorem
50. Measuring Calibration in Deep Learning
51. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
52. Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness
53. Using Pre-Training Can Improve Model Robustness and Uncertainty
54. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem
55. Towards a Definition of Disentangled Representations
56. Measures of distortion for machine learning
57. Single-Model Uncertainties for Deep Learning
58. Invertible Residual Networks
59. Excessive Invariance Causes Adversarial Vulnerability
60. Universal Sentence Encoder for English
61. Towards Principled Uncertainty Estimation for Deep Neural Networks
62. Deep Anomaly Detection with Outlier Exposure
63. Sorting out Lipschitz function approximation
64. Prior Networks for Detection of Adversarial Attacks
65. A universal SNP and small-indel variant caller using deep neural networks
66. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
67. Noise Contrastive Priors for Functional Uncertainty
68. Evidential Deep Learning to Quantify Classification Uncertainty
69. Calibrating Deep Convolutional Gaussian Processes
70. Reachability Analysis of Deep Neural Networks with Provable Guarantees
71. Representing smooth functions as compositions of near-identity functions with implications for deep network optimization
72. Regularisation of neural networks by enforcing Lipschitz continuity
73. The Geometry of Random Features
74. Predictive Uncertainty Estimation via Prior Networks
75. Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
76. i-RevNet: Deep Invertible Networks
77. Spectral Normalization for Generative Adversarial Networks
78. A Scalable Laplace Approximation for Neural Networks
79. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
80. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
81. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
82. DOC: Deep Open Classification of Text Documents
83. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation
84. Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks
85. On Calibration of Modern Neural Networks
86. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
87. Doubly Stochastic Variational Inference for Deep Gaussian Processes
88. Improved Training of Wasserstein GANs
89. Semi-analytical approximations to statistical moments of sigmoid and softmax mappings of normal variables
90. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
91. Stochastic Variational Deep Kernel Learning
92. Orthogonal Random Features
93. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
94. Word Embeddings as Metric Recovery in Semantic Spaces
95. Concrete Problems in AI Safety
96. Density estimation using Real NVP
97. Robust Large Margin Deep Neural Networks
99. Norm-preserving Orthogonal Permutation Linear Unit Activation Functions (OPLU)
100. Towards Open Set Deep Networks
102. Probabilism, entropies and strictly proper scoring rules
103. How Can Deep Rectifier Networks Achieve Linear Separability and Preserve Distances?
104. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
105. Scalable Bayesian Optimization Using Deep Neural Networks
106. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
107. Breaking the Curse of Dimensionality with Convex Neural Networks
108. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
109. Scalable Variational Gaussian Process Classification
110. NICE: Non-linear Independent Components Estimation
111. ImageNet Large Scale Visual Recognition Challenge
112. Probability Models for Open Set Recognition
113. Manifold Gaussian Processes for regression
114. Finite Sample Bernstein-von Mises Theorem for Semiparametric Problems
115. Deep Gaussian Processes
116. Visual and semantic similarity in ImageNet
117. Proper local scoring rules
118. Bayesian data analysis
120. Reliability, sufficiency, and the decomposition of proper scores
121. Variational Learning of Inducing Variables in Sparse Gaussian Processes
122. Uniform approximation of functions with random bases
123. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo
124. Reliability, Sufficiency, and the Decomposition of Proper Scores
125. Random Features for Large-Scale Kernel Machines
126. Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes
127. Probabilistic forecasts, calibration and sharpness
128. Strictly Proper Scoring Rules, Prediction, and Estimation
129. Pattern Recognition and Machine Learning
130. Local distance preservation in the GP-LVM through back constraints
131. Advances in metric embedding theory
132. Random Projection, Margins, Kernels, and Feature-Selection
133. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
134. A global geometric framework for nonlinear dimensionality reduction
135. On the Bernstein-von Mises Theorem with Infinite Dimensional Parameters
136. Statistical Decision Theory and Bayesian Analysis, Second Edition
137. A Practical Bayesian Framework for Backpropagation Networks
138. Transforming Neural-Net Output Levels to Probability Distributions
139. Spline Models for Observational Data
140. Approximate marginal densities of nonlinear functions
141. Distinction Maximization Loss: Efficiently Improving Classification Accuracy, Uncertainty Estimation, and Out-of-Distribution Detection Simply Replacing the Loss and Calibrating
142. What is semantic distance? A review and proposed method for modeling conceptual transitions in natural language (2022)
143. Uncertainty Estimation Using a Single Deep Deterministic Neural Network - ML Reproducibility Challenge 2020
144. Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty
145. AugMix: A Simple Method to Improve Robustness and Uncertainty under Data Shift
146. Appendix for: How Good is the Bayes Posterior in Deep Neural Networks Really?
147. Metric Learning and Manifolds: Preserving the Intrinsic Geometry
148. Principles of Riemannian Geometry in Neural Networks
149. Lecture notes on metric embeddings
150. Probability and statistics. Pearson Education
151. Reading Digits in Natural Images with Unsupervised Feature Learning
152. Distributional measures as proxies for semantic distance: A survey
153. Viscoelastic free surface instabilities during exponential stretching
154. Convergence of Estimates Under Dimensionality Restrictions
155. Principles of mathematical analysis