3260 papers • 126 benchmarks • 313 datasets
(Image credit: Papersgraph)
These leaderboards are used to track progress in Fairness
Use these libraries to find Fairness models and implementations
This work presents a simple yet effective approach, termed FairMOT, based on the anchor-free object detection architecture CenterNet; it outperforms state-of-the-art methods by a large margin on several public datasets.
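FairMOT's key design point is treating detection and re-identification as equal branches over one shared backbone, rather than bolting re-ID onto a detector. Below is a minimal PyTorch sketch of such a two-branch head; the channel widths, `emb_dim`, and head structure are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Illustrative FairMOT-style head: anchor-free (CenterNet-style)
    detection outputs plus a per-pixel re-ID embedding, all computed
    from one shared backbone feature map. Sizes are assumptions."""
    def __init__(self, in_ch=64, num_classes=1, emb_dim=128):
        super().__init__()
        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 256, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(256, out_ch, 1))
        self.heatmap = branch(num_classes)   # object-center heatmap
        self.size = branch(2)                # box width/height at each center
        self.offset = branch(2)              # sub-pixel center offset
        self.reid = branch(emb_dim)          # identity embedding per location

    def forward(self, feat):                 # feat: (B, in_ch, H, W)
        return {
            "hm": torch.sigmoid(self.heatmap(feat)),
            "wh": self.size(feat),
            "off": self.offset(feat),
            "id": nn.functional.normalize(self.reid(feat), dim=1),
        }
```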
AI Fairness 360 (AIF360) is a new open-source Python toolkit for algorithmic fairness, released under an Apache v2.0 license, intended to help transition fairness research algorithms into industrial settings and to provide a common framework for fairness researchers to share and evaluate algorithms.
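AIF360 is pip-installable (`pip install aif360`). A minimal sketch of a typical workflow, wrapping a pandas DataFrame, measuring statistical parity, and applying the Reweighing pre-processor; the toy data, column names, and group definitions are assumptions for illustration:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute, 'label' the binary outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],
    "score": [0.2, 0.5, 0.4, 0.9, 0.7, 0.8, 0.6, 0.3],
    "label": [0, 0, 1, 1, 1, 1, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("statistical parity difference:", metric.statistical_parity_difference())

# Pre-processing mitigation: reweight examples to balance the groups.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
repaired = rw.fit_transform(dataset)
print(repaired.instance_weights)
```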
This paper develops a novel post-hoc visual explanation method called Score-CAM, based on class activation mapping, that outperforms previous methods on both recognition and localization tasks and also passes the sanity check.
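Score-CAM's weighting scheme is gradient-free: each activation map of a chosen conv layer is upsampled, min-max normalized into a soft mask, multiplied into the input, and scored by the target class's confidence on that masked input; the CAM is the ReLU of the score-weighted sum of maps. A simplified PyTorch sketch (assuming `acts` was already captured via a forward hook; maps are scored one at a time for clarity where a real implementation would batch them):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_cam(model, acts, x, target_class):
    """Simplified Score-CAM sketch (not the reference implementation).
    acts: (1, K, h, w) activations hooked from a chosen conv layer.
    x:    (1, C, H, W) input image. Returns an (H, W) saliency map."""
    K = acts.shape[1]
    H, W = x.shape[-2:]
    maps = F.interpolate(acts, size=(H, W), mode="bilinear",
                         align_corners=False)[0]          # (K, H, W)
    # Min-max normalize each upsampled map to [0, 1] to use as a soft mask.
    flat = maps.reshape(K, -1)
    mn = flat.min(dim=1, keepdim=True).values
    mx = flat.max(dim=1, keepdim=True).values
    masks = ((flat - mn) / (mx - mn + 1e-8)).reshape(K, 1, H, W)
    # Weight of each map = target-class confidence on the masked input.
    weights = torch.stack([F.softmax(model(x * m), dim=1)[0, target_class]
                           for m in masks])
    cam = F.relu((F.softmax(weights, dim=0)[:, None, None] * maps).sum(dim=0))
    return cam / (cam.max() + 1e-8)
```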
It is deduced that several testing protocols for COVID-19 recognition are not fair and that the neural networks are learning patterns in the dataset that are not correlated with the presence of COVID-19.
This work builds ELEVATER (Evaluation of Language-augmented Visual Task-level Transfer), the first benchmark and toolkit for evaluating (pre-trained) language-augmented visual models.
This paper presents the first in-depth experimental demonstration of fair transfer learning, showing empirically that the authors' learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.
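One simple way to exercise a fair representation on a new task is to freeze the pretrained encoder and fit only a fresh head on its embeddings; a tiny PyTorch sketch of that reuse step (the encoder, dimensions, and head are illustrative assumptions, not the paper's setup):

```python
import torch.nn as nn

def fair_transfer_model(encoder, emb_dim=64, n_classes=2):
    """Reuse a (presumed fair) pretrained representation for a new task:
    freeze the encoder and train only a new linear head on its embeddings.
    `encoder`, `emb_dim`, and `n_classes` are illustrative assumptions."""
    for p in encoder.parameters():
        p.requires_grad = False              # keep the learned representation fixed
    return nn.Sequential(encoder, nn.Linear(emb_dim, n_classes))
```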
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
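At its core this is a minimax objective: minimize over model weights the worst-case λ-weighted mixture of per-client losses, with λ ranging over the simplex. A simplified, centralized simulation of one round is sketched below (the function and its arguments are hypothetical); a real federated system would aggregate client updates rather than read client data directly:

```python
import torch

def agnostic_fl_round(model, client_loaders, lam, loss_fn,
                      lr_w=0.01, lr_lam=0.1):
    """One simplified minimax round: descend on the lambda-weighted loss,
    then push lambda (a point on the simplex) toward the worst-case mixture
    via exponentiated-gradient ascent. Centralized simulation only."""
    losses = torch.stack([loss_fn(model(x), y)
                          for x, y in (next(iter(dl)) for dl in client_loaders)])
    model.zero_grad()
    (lam * losses).sum().backward()          # gradient of the mixture loss
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr_w * p.grad           # model: gradient descent
        lam *= torch.exp(lr_lam * losses)    # mixture: exponentiated ascent
        lam /= lam.sum()                     # project back onto the simplex
    return lam
```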
This work introduces, and derives theoretical results for, a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model, and includes a hyperparameter to control the trade-off between accuracy and robustness.
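Concretely, a classifier and an adversary are trained in alternation: the adversary tries to recover the nuisance or sensitive attribute from the classifier's output, and the classifier is penalized, weighted by the hyperparameter λ, whenever the adversary succeeds. A minimal PyTorch sketch of one alternating step, with assumed model and optimizer arguments:

```python
import torch
import torch.nn as nn

def pivot_step(clf, adv, opt_clf, opt_adv, x, y, z, lam=1.0):
    """One alternating update of adversarial 'pivot' training (hypothetical
    helper; models and optimizers are assumed). clf predicts y from x; adv
    tries to recover the continuous attribute z from clf's output; lam
    controls the accuracy/robustness trade-off."""
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

    # Adversary step: learn to predict z from the (detached) classifier output.
    opt_adv.zero_grad()
    mse(adv(torch.sigmoid(clf(x)).detach()), z).backward()
    opt_adv.step()

    # Classifier step: fit y while making z unpredictable from the output.
    opt_clf.zero_grad()
    logits = clf(x)
    (bce(logits, y) - lam * mse(adv(torch.sigmoid(logits)), z)).backward()
    opt_clf.step()
```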
It is proved that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses.
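The reduction means an auditor can use any learner over the subgroup class as a heuristic oracle: if a learner can predict, say, where false positives concentrate among the true negatives, the subgroup it finds certifies a violation. A rough scikit-learn sketch of that idea (a heuristic illustration, not the paper's algorithm):

```python
from sklearn.linear_model import LogisticRegression

def audit_fpr_subgroups(X_protected, y_true, y_pred):
    """Heuristic audit in the spirit of the reduction: use a simple learner
    over the protected features to search for a subgroup whose false-positive
    rate deviates from the base rate. Returns (FPR gap, subgroup mass)."""
    neg = y_true == 0                        # FPR is defined on true negatives
    fp = (y_pred[neg] == 1).astype(int)
    if fp.min() == fp.max():                 # degenerate: no FPs (or all FPs)
        return 0.0, 0.0
    g = LogisticRegression().fit(X_protected[neg], fp)
    in_group = g.predict(X_protected[neg]).astype(bool)
    if not in_group.any():
        return 0.0, 0.0
    gap = fp[in_group].mean() - fp.mean()    # subgroup FPR minus overall FPR
    return float(gap), float(in_group.mean())
```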
In general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained at mild cost to accuracy, and optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness.