
Alibi Explain: Algorithms for Explaining Machine Learning Models

Published in Journal of Machine Learning Research (2021-01-01)

On This Page

  • TL;DR
  • Abstract
  • Authors
  • Datasets
  • References

TL;DR

This work introduces Alibi Explain, an open-source Python library for explaining predictions of machine learning models, which features state-of-the-art explainability algorithms for classification and regression models.

Abstract

We introduce Alibi Explain, an open-source Python library for explaining predictions of machine learning models (https://github.com/SeldonIO/alibi). The library features state-of-the-art explainability algorithms for classification and regression models. The algorithms cover both the model-agnostic (black-box) and model-specific (white-box) setting, cater for multiple data types (tabular, text, images) and explanation scope (local and global explanations). The library exposes a unified API enabling users to work with explanations in a consistent way. Alibi adheres to best development practices featuring extensive testing of code correctness and algorithm convergence in a continuous integration environment. The library comes with extensive documentation of both usage and theoretical background of methods, and a suite of worked end-to-end use cases. Alibi aims to be a production-ready toolkit with integrations into machine learning deployment platforms such as Seldon Core and KFServing, and distributed explanation capabilities using Ray.
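
As a rough illustration of the unified API mentioned in the abstract, the sketch below follows a construct, fit, explain pattern with the library's AnchorTabular explainer. The toy data, random-forest model and feature names are placeholders made up for this example, and the attributes read from the returned Explanation object (anchor, precision) are assumed from the library's documentation and may vary between versions; treat this as a minimal sketch rather than definitive usage.

    # Minimal sketch of Alibi's explainer workflow (construct -> fit -> explain),
    # assuming alibi and scikit-learn are installed; data and model are toy placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from alibi.explainers import AnchorTabular

    rng = np.random.default_rng(0)
    X_train = rng.random((1000, 4))
    y_train = (X_train[:, 0] > 0.5).astype(int)      # synthetic binary labels
    feature_names = ["f0", "f1", "f2", "f3"]          # placeholder feature names

    model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
    predict_fn = model.predict                        # any black-box prediction function

    explainer = AnchorTabular(predict_fn, feature_names)  # model-agnostic explainer
    explainer.fit(X_train)                                 # fit on training data statistics
    explanation = explainer.explain(X_train[0])            # local explanation for one instance

    print(explanation.anchor)     # IF-THEN predicates anchoring the prediction (assumed attribute)
    print(explanation.precision)  # estimated precision of the anchor rule (assumed attribute)

Because the explainer wraps an arbitrary prediction function, the same pattern applies whether the underlying model is a scikit-learn estimator, a TensorFlow model or a PyTorch model.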

Authors

  • Janis Klaise
  • A. V. Looveren
  • G. Vacanti
  • Alexandru Coca

References (24 items)

1. PyTorch: An Imperative Style, High-Performance Deep Learning Library
2. TensorFlow: A System for Large-Scale Machine Learning (OSDI '16)
3. Captum: A unified and generic model interpretability library for PyTorch
4. Monitoring and explainability of models in production
5. From local explanations to global understanding with explainable AI for trees
6. Interpretable Machine Learning
7. InterpretML: A Unified Framework for Machine Learning Interpretability
8. Explainable machine learning in deployment
9. Interpretable Counterfactual Explanations Guided by Prototypes
10. Fooling Neural Network Interpretations via Adversarial Model Manipulation
11. iNNvestigate neural networks!
12. Anchors: High-Precision Model-Agnostic Explanations
13. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
14. Ray: A Distributed Framework for Emerging AI Applications
15. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
16. A Unified Approach to Interpreting Model Predictions
17. Axiomatic Attribution for Deep Networks
18. Visualizing the effects of predictor variables in black box supervised learning models
19. AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models
20. Distributed black-box model explanation with Ray, 2020
21. KFServing: Serverless inferencing on kubernetes
22. Project explain interim report
23. Seldon Core: A framework to deploy, manage and scale your production machine learning to thousands of models
24. Association for Computing Machinery

Research Impact

  • 96 Citations
  • 24 References
  • 0 Datasets
  • 4 Authors

Field of Study

Computer Science

Journal Information

  • Name: J. Mach. Learn. Res.
  • Volume: 22

Venue Information

  • Name: Journal of machine learning research
  • Type: journal
  • URL: http://www.ai.mit.edu/projects/jmlr/

Alternate Names

  • Journal of Machine Learning Research
  • J mach learn res
  • J Mach Learn Res