|
# Building Interpretable AI for Digital Pathology

**Presented by:**

- Mara Graziani
- Pre-doc researcher with HES-SO Valais & UniGe
- mara.graziani@hevs.ch

- Guillaume Jaume
- Pre-doc researcher with EPFL & IBM Research
- gja@zurich.ibm.com

- Pushpak Pati
- Pre-doc researcher with ETH & IBM Research
- pus@zurich.ibm.com

Welcome to the AMLD 2021 workshop on **Building Interpretable AI for Digital Pathology**. This hands-on session showcases several ways in which developers can interpret automated decision making in digital pathology.
## Schedule

The workshop will take place on the 27th of April from 9:00 to 12:00 CET.

| Time        | Title                                             | Presenter                     |
|-------------|---------------------------------------------------|-------------------------------|
| 9:00-9:05   | Welcome                                           | Guillaume Jaume               |
| 9:05-9:25   | Introduction to Digital Pathology                 | Prof. Dr. Inti Zlobec         |
| 9:25-9:45   | Introduction to Interpretability                  | Mara Graziani                 |
| 9:45-9:55   | Break 1                                           | -                             |
| 9:55-10:35  | Hands-on session 1: CNNs & Concept Attribution    | Mara Graziani                 |
| 10:35-10:45 | Break 2                                           | -                             |
| 10:45-11:55 | Hands-on session 2: Graph-based interpretability  | Guillaume Jaume, Pushpak Pati |
| 11:55-12:00 | Closing remarks                                   | Pushpak Pati                  |
## What to do before the workshop

Participants need to bring their own laptop with a basic development setup. We recommend testing the following steps before the workshop:

- Clone the repository

```
>> git clone https://github.com/maragraziani/interpretAI_DigiPath.git && cd interpretAI_DigiPath
```

- Launch Jupyter Notebook

```
>> jupyter notebook
```

- Test opening one of the notebooks in Colab, e.g., [hands-on-session-1/feature_attribution_demo.ipynb](https://github.com/maragraziani/interpretAI_DigiPath/blob/main/hands-on-session-1/feature_attribution_demo.ipynb).
## Content
Deep learning algorithms may hide inherent risks such as the codification of biases, weak accountability, and poor transparency of their decision making. Because they give little insight into how they reach their final output, deep models are often perceived by clinicians as black boxes.
Clinicians, for their part, remain solely legally responsible and accountable for diagnoses and treatment decisions.
Providing justifications for automated predictions may therefore have a positive impact on computer-aided diagnosis, for example by increasing the uptake of automated support within the decision-making process.

<!---
You have a deep learning model, be it a Convolutional Neural Network (CNN) or a graph network.
Your model works on high-magnification crops of histopathology input images, also called patches or tiles.
The main task is to separate patches containing evidence of tumor from patches without it.
This is modeled as a binary classification task with a single output node and a sigmoid (logistic) activation, where 1 corresponds to the "tumor" class and 0 to the non-tumor class.

Common theme:
- histopathology image input: you may use any of your histopathology datasets, or public data collections
- continuous or categorical output: a single output node is used for demonstration purposes. Similar applications can be derived for multi-node outputs, e.g., multi-class classification tasks.
-->
### Part 1: Interpreting 2D CNNs

This part focuses on understanding the decision process of ConvNets with:
* feature attribution: Class Activation Mapping (CAM) and its Gradient-weighted version (Grad-CAM)
* concept attribution: Regression Concept Vectors (RCV)

You will work on the implementation of Gradient-weighted Class Activation Mapping (Grad-CAM) as an example of feature attribution, along the lines of the sketch below.
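The sketch is a minimal, self-contained Grad-CAM written with TensorFlow/Keras; `model`, the convolutional layer name, and the single "tumor" output node are placeholder assumptions and may differ from the workshop notebook.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=0):
    """Grad-CAM heatmap for one patch `image` of shape (H, W, 3),
    already preprocessed the way `model` expects."""
    # Model that maps the input patch to (conv feature maps, prediction).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]              # e.g. the single "tumor" output node
    grads = tape.gradient(score, conv_maps)        # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_maps, axis=-1)
    cam = tf.nn.relu(cam)[0]                       # keep only positive evidence
    cam = cam / (tf.reduce_max(cam) + 1e-8)        # normalise to [0, 1]
    return cam.numpy()                             # upsample and overlay on the patch
```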
RCVs will be applied to generate complementary explanations in terms of clinically relevant measures such as nuclei area and appearance.
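In spirit, an RCV is obtained by fitting a least-squares regression from the activations of an intermediate layer to a continuous concept measure, and the class score's sensitivity is then measured along that direction; in the hypothetical snippet below, `acts`, `concept`, and `grads` are assumed to be precomputed arrays and do not come from the workshop code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# acts:    (N, D) pooled activations of an intermediate layer for N probe patches
# concept: (N,)   continuous concept measurements, e.g. mean nuclei area per patch
# grads:   (N, D) gradients of the "tumor" score w.r.t. those activations

def regression_concept_vector(acts, concept):
    """Unit vector in activation space that best predicts the concept measure."""
    reg = LinearRegression().fit(acts, concept)
    rcv = reg.coef_ / np.linalg.norm(reg.coef_)
    r2 = reg.score(acts, concept)    # how well the layer encodes the concept
    return rcv, r2

def concept_sensitivity(grads, rcv):
    """Directional derivative of the tumor score along the concept direction."""
    return grads @ rcv               # one sensitivity score per patch
```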
The notebooks and instructions for this part are in the folder `hands-on-session-1`.
### Part 2: Explainable Graph-based Representations in Digital Pathology
The second part of this tutorial will guide you through building **interpretable entity-based representations** of tissue regions. The motivation starts from the observation that cancer diagnosis and prognosis are driven by the distribution of histological entities, *e.g.,* cells, nuclei, and tissue regions. A natural way to characterize the tissue is therefore to represent it as a set of interacting entities, *i.e.,* a graph. Unlike most deep learning techniques, which operate at the pixel level, entity-based analysis preserves the notion of histopathological entities, which pathologists can relate to and reason with. Explanations of entity-graph based methodologies can therefore be interpreted by pathologists, which can help build trust in AI and foster its adoption in clinical practice. Notably, explanations produced in the entity space are better localized, and therefore easier to discern.
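As a toy illustration of such an entity graph, the snippet below links each nucleus centroid to its nearest neighbours; the random centroids, the k-nearest-neighbour rule, and the distance threshold are illustrative assumptions, not the graph-building pipeline used in the workshop notebooks.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_cell_graph(centroids, k=5, max_dist=50.0):
    """Edges of a toy cell graph: each nucleus is linked to its k nearest
    neighbours lying within `max_dist` pixels. `centroids` is an (N, 2)
    array of nucleus centres, e.g. produced by a nuclei-detection model."""
    tree = cKDTree(centroids)
    dists, idx = tree.query(centroids, k=k + 1)   # closest "neighbour" is the node itself
    edges = []
    for i in range(len(centroids)):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            if d <= max_dist:
                edges.append((i, int(j)))
    return edges

# Tiny usage example with random centroids standing in for detected nuclei.
rng = np.random.default_rng(0)
fake_nuclei = rng.uniform(0, 256, size=(30, 2))
print(len(knn_cell_graph(fake_nuclei, k=4)), "edges")
```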
## Reference papers

```
@article{graziani2020,
  title   = "Concept attribution: Explaining {{CNN}} decisions to physicians",
  author  = "Mara Graziani and Vincent Andrearczyk and Stephane Marchand-Maillet and Henning Müller",
  journal = "Computers in Biology and Medicine",
  pages   = "103865",
  year    = "2020",
  doi     = "10.1016/j.compbiomed.2020.103865"
}

@inproceedings{pati2021,
  title     = "Hierarchical Graph Representations in Digital Pathology",
  author    = "Pushpak Pati and Guillaume Jaume and Antonio Foncubierta and Florinda Feroce and Anna Maria Anniciello and Giosuè Scognamiglio and Nadia Brancati and Maryse Fiche and Estelle Dubruc and Daniel Riccio and Maurizio Di Bonito and Giuseppe De Pietro and Gerardo Botti and Jean-Philippe Thiran and Maria Frucci and Orcun Goksel and Maria Gabrani",
  booktitle = "arXiv",
  url       = "https://arxiv.org/abs/2102.11057",
  year      = "2021"
}

@inproceedings{jaume2021,
  title     = "Quantifying Explainers of Graph Neural Networks in Computational Pathology",
  author    = "Guillaume Jaume and Pushpak Pati and Behzad Bozorgtabar and Antonio Foncubierta-Rodríguez and Florinda Feroce and Anna Maria Anniciello and Tilman Rau and Jean-Philippe Thiran and Maria Gabrani and Orcun Goksel",
  booktitle = "IEEE CVPR",
  url       = "https://arxiv.org/abs/2011.12646",
  year      = "2021"
}
```