1. Fighting the disagreement in Explainable Machine Learning with consensus
2. In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
3. Explanations Can Reduce Overreliance on AI Systems During Decision-Making
4. OpenXAI: Towards a Transparent Evaluation of Model Explanations
5. Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
6. Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization
7. Machine Explanations and Human Understanding
8. Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
9. A Survey of Human-Centered Evaluations in Human-Centered Machine Learning
10. Order in the Court: Explainable AI Methods Prone to Disagreement
11. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics
12. Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
13. Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
14. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
15. Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
16. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning
17. "How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
18. How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods
19. Explainable machine learning in deployment
20. On the interpretability of machine learning-based model for predicting hypertension
21. Machine Learning Interpretability: A Survey on Methods and Metrics
22. Explanations can be manipulated and geometry is to blame
23. Certifiably Robust Interpretation in Deep Learning
24. Fairwashing: the risk of rationalization
25. Global Explanations of Neural Networks: Mapping the Landscape of Predictions
26. Faithful and Customizable Explanations of Black Box Models
27. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
28. Concise Explanations of Neural Networks using Adversarial Training
29. Sanity Checks for Saliency Maps
30. Evaluating Feature Importance Estimates
31. On the Robustness of Interpretability Methods
32. RISE: Randomized Input Sampling for Explanation of Black-box Models
33. Explaining Explanations: An Overview of Interpretability of Machine Learning
34. Anchors: High-Precision Model-Agnostic Explanations
35. Manipulating and Measuring Model Interpretability
36. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
37. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
38. Interpretation of Neural Networks is Fragile
39. Interpretability via Model Extraction
40. SmoothGrad: removing noise by adding noise
41. A Unified Approach to Interpreting Model Predictions
42. Learning Important Features Through Propagating Activation Differences
43. Axiomatic Attribution for Deep Networks
44. Towards A Rigorous Science of Interpretable Machine Learning
45. Mapping chemical performance on molecular structures using locally interpretable explanations
46. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
47. Interpretable Decision Sets: A Joint Framework for Description and Prediction
48. Model-Agnostic Interpretability of Machine Learning
49. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier
50. Deep Residual Learning for Image Recognition
51. Character-level Convolutional Networks for Text Classification
52. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model
53. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
54. The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
55. ImageNet Large Scale Visual Recognition Challenge
56. On the calibration of sensor arrays for pattern recognition using the minimal number of experiments
57. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
58. Intelligible models for classification and regression
59. Classification by Set Cover: The Prototype Vector Machine
60. To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making
61. www.image-net.org/challenges/LSVRC/index
63. Pascal Visual Object Classes
64. Which methods do you prefer, and why?
65. Since you believe that the above explanations disagree (to some extent), which explanation would you rely on? (choice between Algorithm 1 explanation, Algorithm 2 explanation, …)
66. To what extent do you think the two explanations shown above agree or disagree with each other? (choice between Completely agree, Mostly agree, Mostly disagree, …)
67. corpus of news articles
68. ProPublica article on COMPAS
69. How we analyzed the COMPAS recidivism algorithm
70. We apply this framework in an extensive empirical analysis
71. Which explainability methods do you use in your day-to-day workflow?
72. Which data modalities do you run explainability algorithms on in your day-to-day workflow?
73. Do you observe disagreements between explanations output by state-of-the-art methods in your day-to-day workflow?