<a name="readme-top"></a>

<div align="center">
<h3 align="center">Smooth Attention for Deep Multiple Instance Learning: Application to CT Intracranial Hemorrhage Detection</h3>

  <p align="center">
    Code for the paper accepted at MICCAI 2023.
  </p>
</div>


<!-- TABLE OF CONTENTS -->
<details>
  <summary>Table of Contents</summary>
  <ol>
    <li>
      <a href="#introduction">Introduction</a>
    </li>
    <li>
      <a href="#getting-started">Getting Started</a>
      <ul>
        <li><a href="#prerequisites">Prerequisites</a></li>
        <li><a href="#installation">Installation</a></li>
      </ul>
    </li>
    <li><a href="#dataset">Dataset</a></li>
    <li><a href="#usage">Usage</a></li>
    <li><a href="#license">License</a></li>
  </ol>
</details>


<!-- INTRODUCTION -->
## Introduction

Multiple Instance Learning (MIL) has been widely applied to medical imaging diagnosis, where bag labels are known but the labels of the instances inside each bag are unknown. Traditional MIL assumes that the instances in a bag are independent samples from a given distribution. However, instances are often spatially or sequentially ordered, and one would expect neighboring instances to have similar diagnostic importance. To address this, we propose a smooth attention deep MIL (SA-DMIL) model. Smoothness is achieved by introducing first- and second-order constraints on the latent function that encodes the attention paid to each instance in a bag. The method is applied to the detection of intracranial hemorrhage (ICH) on head CT scans.

The results show that this novel SA-DMIL: (a) achieves better performance than the non-smooth attention MIL at both scan (bag) and slice (instance) levels; (b) learns spatial dependencies between slices; and (c) outperforms current state-of-the-art MIL methods on the same ICH test set.
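
To make the smoothing idea concrete, below is a minimal TensorFlow sketch of first- and second-order smoothness penalties on the per-slice attention scores of one scan. It illustrates the idea rather than reproducing the exact implementation in the training notebooks, and the weights `alpha` and `beta` are placeholder hyperparameters, not the paper's values.

```python
import tensorflow as tf

def smoothness_penalty(scores, alpha=0.1, beta=0.1):
    """Penalize non-smooth attention over adjacent slices of one scan.

    scores: (num_slices,) latent attention values for one bag.
    The first-order term penalizes differences between neighboring
    slices; the second-order term penalizes curvature. `alpha` and
    `beta` are illustrative weights, not the paper's hyperparameters.
    """
    d1 = scores[1:] - scores[:-1]   # first-order (neighbor) differences
    d2 = d1[1:] - d1[:-1]           # second-order differences (curvature)
    return alpha * tf.reduce_sum(tf.square(d1)) + beta * tf.reduce_sum(tf.square(d2))

# A spiky attention profile is penalized more heavily than a smooth ramp.
spiky  = tf.constant([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
smooth = tf.constant([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
print(smoothness_penalty(spiky).numpy(), smoothness_penalty(smooth).numpy())
```

In training, a term like this is added to the bag-level classification loss so that attention varies gradually across neighboring slices.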

<p align="right">(<a href="#readme-top">back to top</a>)</p>


<!-- GETTING STARTED -->
## Getting Started

Follow these steps to get a local copy of the project up and running.

### Prerequisites

* The code uses TensorFlow. All required packages are listed in [requirements.txt](https://github.com/YunanWu2168/SA-MIL/blob/master/requirements.txt):

```
matplotlib==3.7.1
numpy==1.22.4
opencv_contrib_python==4.7.0.72
opencv_python==4.7.0.72
opencv_python_headless==4.7.0.72
pandas==1.5.3
scikit_learn==1.2.2
tensorflow==2.12.0
```

### Installation

1. Install the packages in [requirements.txt](https://github.com/YunanWu2168/SA-MIL/blob/master/requirements.txt):
   ```sh
   pip install -r requirements.txt
   ```
2. Open `SA_MIL_preprocessing.ipynb` - how to preprocess the head CT scans
3. Open `SA_MIL_training.ipynb` - train SA-DMIL
4. Open `Non_SA_MIL_training.ipynb` - train the non-smooth Att-MIL baseline
5. Open `SA_MIL_testing.ipynb` - test SA-DMIL and Att-MIL
6. Open `vis_SA_MIL.ipynb` - visualize attention at the slice level

<p align="right">(<a href="#readme-top">back to top</a>)</p>

## Dataset

### Download

The dataset used in this paper can be downloaded from the [RSNA Intracranial Hemorrhage Detection Kaggle challenge](https://www.kaggle.com/competitions/rsna-intracranial-hemorrhage-detection/data).
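
If you prefer to fetch the data programmatically, the Kaggle API can be used, as in the sketch below. This assumes `pip install kaggle`, a valid `~/.kaggle/kaggle.json` API token, and that you have accepted the competition rules on the Kaggle website; the `rsna_data` path is just an example.

```python
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()
# Downloads the competition archive into ./rsna_data
api.competition_download_files(
    "rsna-intracranial-hemorrhage-detection", path="rsna_data"
)
```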

### CT Slice Image Processing with Windowing

The windowing and preprocessing steps are implemented in [SA_MIL_preprocessing.ipynb](https://github.com/YunanWu2168/SA-MIL/blob/master/SA_MIL_preprocessing.ipynb).
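
For readers new to CT windowing, here is a minimal NumPy sketch of the general technique: raw DICOM pixel values are converted to Hounsfield units (HU) with the rescale slope/intercept from the header, then clipped to an intensity window and rescaled to [0, 1]. The specific windows below (brain 40/80, subdural 80/200, soft tissue 40/380) are common choices for ICH and are assumptions here; the notebook defines the actual values used.

```python
import numpy as np

def to_hu(pixel_array, slope, intercept):
    """Convert raw DICOM pixel values to Hounsfield units."""
    return pixel_array * slope + intercept

def window_ct(hu_image, center, width):
    """Clip a slice in HU to [center - width/2, center + width/2]
    and rescale the result to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu_image, lo, hi) - lo) / (hi - lo)

# Illustrative three-channel input built from three windows of one slice.
raw = np.random.randint(0, 3000, (512, 512)).astype(np.float32)
hu = to_hu(raw, slope=1.0, intercept=-1024.0)
stacked = np.stack([window_ct(hu, 40, 80),    # brain window
                    window_ct(hu, 80, 200),   # subdural window
                    window_ct(hu, 40, 380)],  # soft-tissue window
                   axis=-1)                   # shape (512, 512, 3)
```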

<!-- USAGE EXAMPLES -->
## Usage

1. Model training at scan level (a sketch of the underlying attention pooling follows this list):

   [SA_MIL_training.ipynb](https://github.com/YunanWu2168/SA-MIL/blob/master/SA_MIL_training.ipynb)

   [Non_SA_MIL_training.ipynb](https://github.com/YunanWu2168/SA-MIL/blob/master/Non_SA_MIL_training.ipynb)

2. Model testing at scan level:

   [SA_MIL_testing.ipynb](https://github.com/YunanWu2168/SA-MIL/blob/master/SA_MIL_testing.ipynb)

3. Model testing at slice level:

   [vis_SA_MIL.ipynb](https://github.com/YunanWu2168/SA-MIL/blob/master/vis_SA_MIL.ipynb)
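
For orientation, the sketch below shows generic attention-based MIL pooling in the spirit of Ilse et al. (2018), which both Att-MIL and SA-DMIL build on: each slice embedding is scored, the scores are softmax-normalized over the scan, and the scan-level representation is the attention-weighted average. It is an illustrative stand-in, not the exact architecture in the training notebooks, and `hidden_dim` is a placeholder.

```python
import tensorflow as tf
from tensorflow.keras import layers

class AttentionMILPooling(layers.Layer):
    """Illustrative attention-based MIL pooling for one scan (bag)."""

    def __init__(self, hidden_dim=128):
        super().__init__()
        self.score = tf.keras.Sequential([
            layers.Dense(hidden_dim, activation="tanh"),
            layers.Dense(1),
        ])

    def call(self, instance_embeddings):
        # instance_embeddings: (num_slices, feat_dim) for one scan
        scores = self.score(instance_embeddings)        # (num_slices, 1)
        attention = tf.nn.softmax(scores, axis=0)       # sums to 1 over slices
        bag_embedding = tf.reduce_sum(attention * instance_embeddings, axis=0)
        return bag_embedding, attention

# Example: a scan with 30 slices, each encoded as a 512-d feature vector.
pool = AttentionMILPooling()
bag, att = pool(tf.random.normal((30, 512)))
print(bag.shape, att.shape)  # (512,) (30, 1)
```

SA-DMIL adds smoothness constraints (see the sketch in the Introduction) on the pre-softmax scores, so that neighboring slices receive similar attention.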

<p align="right">(<a href="#readme-top">back to top</a>)</p>



<!-- LICENSE -->
## License

Distributed under the MIT License. See `LICENSE.txt` for more information.

<p align="right">(<a href="#readme-top">back to top</a>)</p>