It is shown how post-hoc interpretation methods can reveal biases in AI systems predicting hospital length of stay, using a novel multi-modal dataset of 1235 X-ray images paired with textual radiology reports annotated by human experts.
Authors
Hubert Baniecki
P. Biecek
Bartlomiej Sobieski
Przemysław Bombiński
Patryk Szatkowski