1. A Survey on Ethical Principles of AI and Implementations
2. Machine learning for human learners: opportunities, issues, tensions and threats
3. Counterfactual Explanations for Machine Learning: A Review
4. Interpretable Machine Learning - A Brief History, State-of-the-Art and Challenges
5. The Black Box, Unlocked: Predictability and Understandability in Military AI
6. An Artificial Intelligence Approach to Predict Gross Primary Productivity in the Forests of South Korea Using Satellite Remote Sensing Data
7. The European Legal Framework for Medical AI
8. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
9. Melody: Generating and Visualizing Machine Learning Model Summary to Understand Data and Classifiers Together
10. On quantitative aspects of model interpretability
11. A Radial Visualisation for Model Comparison and Feature Identification
12. Explainable Matrix - Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
13. ViCE: visual counterfactual explanations for machine learning models
14. Proxy tasks and subjective measures can be misleading in evaluating explainable AI systems
15. Measuring the Quality of Explanations: The System Causability Scale (SCS)
16. Interpretable Machine Learning
17. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance
18. Human Evaluation of Models Built for Interpretability
19. AHNG: Representation learning on attributed heterogeneous network
20. Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
21. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
22. Physiological Indicators for User Trust in Machine Learning with Influence Enhanced Fact-Checking
23. AI in the public interest
24. Machine Learning Interpretability: A Survey on Methods and Metrics
25. Benchmarking Attribution Methods with Relative Feature Importance
26. "Do you trust me?": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design
27. DARPA's Explainable Artificial Intelligence (XAI) Program
28. Effects of Influence on User Trust in Predictive Decision Making
29. Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability
30. Causability and explainability of artificial intelligence in medicine
31. Assessing the Local Interpretability of Machine Learning Models
32. On the (In)fidelity and Sensitivity for Explanations
33. Quantifying Interpretability and Trust in Machine Learning Systems
34. Metrics for Explainable AI: Challenges and Prospects
35. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
36. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
37. Toward Human-Understandable, Explainable AI
38. Explainable AI: The New 42?
39. How AI can be a force for good
40. From Machine Learning to Explainable AI
41. A Benchmark for Interpretability Methods in Deep Neural Networks
42. Towards Robust Interpretability with Self-Explaining Neural Networks
43. Explaining Explanations: An Overview of Interpretability of Machine Learning
44. Manipulating and Measuring Model Interpretability
45. A Survey of Methods for Explaining Black Box Models
46. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
47. Towards better understanding of gradient-based attribution methods for Deep Neural Networks
48. Interpretable Convolutional Neural Networks
49. Effects of Uncertainty and Cognitive Load on User Trust in Predictive Decision Making
50. Interpretable & Explorable Approximations of Black Box Models
51. Methods for interpreting and understanding deep neural networks
52. Explanation in Artificial Intelligence: Insights from the Social Sciences
53. A Unified Approach to Interpreting Model Predictions
54. Understanding Black-box Predictions via Influence Functions
55. User Trust Dynamics: An Investigation Driven by Differences in System Performance
56. Axiomatic Attribution for Deep Networks
57. Towards A Rigorous Science of Interpretable Machine Learning
58. Correlation for user confidence in predictive decision making
59. Can we open the black box of AI?
60. Explanatory Preferences Shape Learning and Inference
61. Diagnostic visualization for non-expert machine learning practitioners: A design study
62. Interpretable Decision Sets: A Joint Framework for Description and Prediction
63. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
64. Generating Visual Explanations
65. Interactive machine learning for health informatics: when do we need the human-in-the-loop?
66. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
67. Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks
68. ModelTracker: Redesigning Performance Analysis Tools for Machine Learning
69. Be Informed and Be Involved: Effects of Uncertainty and Correlation on User's Confidence in Decision Making
70. Distilling the Knowledge in a Neural Network
71. Measurable Decision Making with GSR and Pupillary Analysis for Intelligent User Interface
72. Improved Similarity Trees and their Application to Visual Data Classification
73. An Interactive Bio-inspired Approach to Clustering and Visualizing Datasets
74. Nugget Browser: Visual Subgroup Mining and Statistical Significance Discovery in Multivariate Datasets
75. An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models
76. The XAOS Metric - Understanding Visual Complexity as Measure of Usability
77. Dissecting explanatory power
78. Reading Tea Leaves: How Humans Interpret Topic Models
79. EnsembleMatrix: interactive visualization to support machine learning with multiple classifiers
80. Visual Analytics: Scope and Challenges
81. Learning interpretable models
82. Gaining insights into support vector machine pattern classifiers using projection-based tour methods
83. Explaining Decisions Made with AI
84. Review Study of Interpretation Methods for Future Interpretable Machine Learning
85. Human-in-the-Loop Learning of Interpretable and Intuitive Representations
86. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
87. Do I Trust a Machine? Differences in User Trust Based on System Performance
88. Can we Trust Machine Learning Results? Artificial Intelligence in Safety-Critical Decision Support
89. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery.
90. Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent; Human–Computer Interaction Series; Springer: Berlin/Heidelberg, Germany, 2018; ISBN 978-3-319-90402-3
91. Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent
92. Communications in Computer and Information Science
93. Making machine learning useable by revealing internal states update - a transparent approach
94. Causality: Models, Reasoning, and Inference, 2nd ed.
95. EnsembleMatrix: Interactive Visualization to Support Machine Learning with Multiple Classifiers
96. Visualizing the Simple Bayesian Classifier