3260 papers • 126 benchmarks • 313 datasets
Confidence calibration – the problem of predicting probability estimates representative of the true correctness likelihood – is important for classification models in many applications. Two common calibration metrics are Expected Calibration Error (ECE) and Maximum Calibration Error (MCE).
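Both metrics compare a model's confidence to its empirical accuracy within confidence bins: ECE is the bin-mass-weighted average gap, MCE the worst-case gap. Below is a minimal NumPy sketch of the standard equal-width binning scheme; the function name `calibration_errors` and the 15-bin default are illustrative assumptions, not something specified on this page.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=15):
    """Sketch of Expected / Maximum Calibration Error (ECE / MCE)
    with equal-width confidence bins.

    confidences: predicted probability of the predicted class per sample
    correct:     1/0 (or bool) per sample, whether the prediction was right
    n_bins:      number of equal-width bins (15 is a common choice)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)

    ece, mce = 0.0, 0.0
    n = len(confidences)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Samples whose confidence falls in (lo, hi]
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        accuracy = correct[in_bin].mean()      # empirical accuracy in the bin
        avg_conf = confidences[in_bin].mean()  # average confidence in the bin
        gap = abs(avg_conf - accuracy)
        ece += (in_bin.sum() / n) * gap        # ECE: gap weighted by bin mass
        mce = max(mce, gap)                    # MCE: worst-case bin gap
    return ece, mce

# Toy example: over-confident predictions give nonzero ECE/MCE
conf = np.array([0.9, 0.8, 0.95, 0.7, 0.85])
hits = np.array([1, 0, 1, 1, 0])
print(calibration_errors(conf, hits))
```

A perfectly calibrated model would have accuracy equal to average confidence in every bin, driving both values to zero.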
(Image credit: Papersgraph)
These leaderboards are used to track progress in Classifier Calibration.
Use these libraries to find Classifier Calibration models and implementations.