A method for the local and global interpretation of a black-box model is proposed on the basis of the well-known generalized additive models; it provides feature weights in explicit form and is simple to train.
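As a rough illustration of the general idea (not the authors' exact algorithm), a GAM-style surrogate can be fitted to a black box around an instance, with each feature's shape function yielding an explicit contribution. The function names, the toy black box, and the choice of polynomial shape functions below are all assumptions made for this sketch:

```python
import numpy as np

def black_box(X):
    # Hypothetical black-box model: a nonlinear function of two features.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def gam_surrogate(f, x0, n_samples=500, sigma=0.5, degree=3, seed=0):
    """Fit an additive surrogate g(z) = b + sum_j g_j(z_j) to f near x0.

    Each shape function g_j is a low-degree polynomial in (z_j - x0_j),
    so g_j(z_j) is an explicit per-feature contribution. This is a
    generic local-surrogate sketch, not the paper's training procedure.
    """
    rng = np.random.default_rng(seed)
    d = x0.size
    # Sample perturbations around the instance and query the black box.
    Z = x0 + sigma * rng.standard_normal((n_samples, d))
    y = f(Z)
    # Design matrix: intercept plus polynomial basis terms per feature.
    cols = [np.ones(n_samples)]
    for j in range(d):
        for p in range(1, degree + 1):
            cols.append((Z[:, j] - x0[j]) ** p)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def contributions(z):
        # Evaluate each feature's shape function g_j at z_j.
        out = np.zeros(d)
        k = 1
        for j in range(d):
            for p in range(1, degree + 1):
                out[j] += coef[k] * (z[j] - x0[j]) ** p
                k += 1
        return out

    return coef[0], contributions

x0 = np.array([1.0, 2.0])
intercept, contrib = gam_surrogate(black_box, x0)
```

At the instance itself the contributions vanish, so the intercept approximates the black-box output at `x0`; at nearby points, the intercept plus the summed per-feature contributions approximates the black-box prediction, and the individual terms serve as local explanation weights.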
Authors
A. Konstantinov
L. Utkin