Explainable Artificial Intelligence
3260 papers • 126 benchmarks • 313 datasets
(Image credit: Papersgraph)
This paper addresses explainable AI for deep neural networks that take an image as input and output a class probability, and proposes RISE, an approach that generates an importance map indicating how salient each pixel is to the model's prediction.
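RISE's core idea is to probe the model with many random binary masks and average the masks weighted by the model's score on each masked input. A minimal NumPy sketch of that idea follows; `rise_saliency` and the toy `model` interface are hypothetical names, and nearest-neighbour mask upsampling is used here for brevity (the paper upsamples the coarse grids bilinearly with random shifts):

```python
import numpy as np

def rise_saliency(model, image, n_masks=400, p=0.5, cell=7, seed=0):
    """RISE-style importance map: average random binary masks,
    each weighted by the model's score on the masked image."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    saliency = np.zeros((H, W))
    total = 0.0
    for _ in range(n_masks):
        # Coarse binary grid, upsampled to image size (nearest-neighbour
        # here; the paper uses shifted bilinear upsampling).
        grid = (rng.random((cell, cell)) < p).astype(float)
        block = (H // cell + 1, W // cell + 1)
        mask = np.kron(grid, np.ones(block))[:H, :W]
        score = model(image * mask)  # class probability for the masked input
        saliency += score * mask
        total += score
    return saliency / max(total, 1e-8)
```

Because only forward passes are needed, the method is black-box: it never inspects the model's weights or gradients.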
Three software packages designed to facilitate the exploration of model reasoning using attribution approaches and beyond are presented, aiming to promote reproducibility in the field and empower scientists and practitioners to uncover the intricacies of complex model behavior.
This paper exploits the human tendency to ask contrastive questions, reducing the features to those that play a main role in the asked contrast, in order to identify the disjoint set of rules that causes a decision tree to classify data points as the foil rather than as the fact.
A novel open-source audio dataset is presented, consisting of 30,000 audio samples of spoken English digits, which is used for classification of spoken digits and of speakers' biological sex; a human user study demonstrates the superior interpretability of audible explanations over visual ones.
This paper examines the behavior of the most popular instance-level explanations in the presence of interactions, introduces a new method that detects interactions for instance-level explanations, and performs a large-scale benchmark to see how frequently additive explanations may be misleading.
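Why additive explanations can mislead is easiest to see with a second-order finite difference: for a purely additive model it is exactly zero, while for a model with an interaction it is not. A minimal sketch under assumed names (`pairwise_interaction` is hypothetical, not the paper's method):

```python
def pairwise_interaction(f, x, baseline, i, j):
    """Second-order finite difference between features i and j:
    f(x_i, x_j) - f(x_i) - f(x_j) + f(baseline), where each term
    sets only the named features of the baseline to their values in x.
    Zero for any additive model f(v) = g(v_i) + h(v_j) + const."""
    xi = list(baseline); xi[i] = x[i]
    xj = list(baseline); xj[j] = x[j]
    xij = list(baseline); xij[i] = x[i]; xij[j] = x[j]
    return f(xij) - f(xi) - f(xj) + f(baseline)
```

For a multiplicative model such as f(v) = v[0] * v[1], the difference is nonzero, so any purely additive attribution must smear the joint effect across the two features.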
The established computer vision explainability principle of 'visualizing preferred inputs of neurons' is modified to make it usable for transfer analysis and NLP; TX-Ray expresses neurons as feature preference distributions to quantify fine-grained knowledge transfer or adaptation and to guide human analysis.
A synthetic dataset that can be generated ad hoc along with ground-truth heatmaps is introduced for more objective quantitative assessment of different XAI methods, and mabCAM is introduced as a heatmap generation method compatible with the authors' ground-truth heatmaps.