Confidence calibration – the problem of producing probability estimates that reflect the true likelihood of correctness – is important for classification models in many applications. Two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
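As an illustration, here is a minimal NumPy sketch of the standard binned estimators for both metrics. The function name `calibration_errors`, the array-based interface, and the default of 15 bins are assumptions for this example, not a reference implementation. ECE weights each bin's accuracy–confidence gap by the fraction of samples in the bin, while MCE takes the largest gap over all bins.

```python
import numpy as np

def calibration_errors(confidences, predictions, labels, n_bins=15):
    """Estimate ECE and MCE by binning samples on predicted confidence.

    confidences: max predicted probability per sample, shape (N,)
    predictions: predicted class label per sample, shape (N,)
    labels:      true class label per sample, shape (N,)
    """
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    accuracies = (predictions == labels).astype(float)

    ece, mce = 0.0, 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Samples whose confidence falls into the half-open bin (lo, hi].
        in_bin = (confidences > lo) & (confidences <= hi)
        prop = in_bin.mean()  # fraction of all samples in this bin
        if prop > 0:
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += prop * gap    # bin-weighted average gap
            mce = max(mce, gap)  # worst-case gap over bins
    return ece, mce
```

A perfectly calibrated model would have both values near zero: in every confidence bin, the average confidence would match the empirical accuracy.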
These leaderboards are used to track progress in Classifier Calibration.
Use these libraries to find Classifier Calibration models and implementations.