# Generalizable deep learning model for early Alzheimer’s disease detection from structural MRIs

This repository contains code for a medical [paper](https://www.nature.com/articles/s41598-022-20674-x) and a machine learning [paper](http://proceedings.mlr.press/v116/liu20a) on deep learning for dementia.

In the medical [paper](https://www.nature.com/articles/s41598-022-20674-x), we compared the deep learning model with volume/thickness models on an external independent cohort from [NACC](https://naccdata.org/). The volume and thickness data were extracted using FreeSurfer and quality-controlled by radiologists.

If you would like to access the volume and thickness data as well as the subject and scan IDs, please download them from the [/Data](https://github.com/NYUMedML/CNN_design_for_AD/tree/master/Data) folder.
<p float="left" align="center">
<img src="https://easymedai.com/models/design-for-ad/git/ci/main/tree/overview.png?format=raw" width="800" />
<figcaption align="center">
Figure: Overview of the deep learning framework and performance for Alzheimer’s automatic diagnosis. (a) Deep learning framework used for automatic diagnosis.
</figcaption>
</p>

**Contact:** [Sheng Liu](https://shengliu66.github.io/)

<h2>Introduction</h2>

In this project, we focus on how to design CNNs for Alzheimer's detection. We provide evidence that:

* instance normalization outperforms batch normalization
* early spatial downsampling negatively affects performance
* widening the model brings consistent gains while increasing the depth does not
* incorporating age information yields moderate improvement
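
The first finding can be illustrated with a toy sketch in plain Python (not the repository's model code, which applies `InstanceNorm3d` to 3-D feature maps): under instance normalization each scan is normalized with its own statistics, so the result does not depend on which other scans happen to share the mini-batch.

```python
def normalize(values, mean, var, eps=1e-5):
    """Standardize a list of values with the given statistics."""
    return [(v - mean) / (var + eps) ** 0.5 for v in values]

def instance_norm(batch):
    # Each sample is normalized with its own mean/variance, so the
    # output for one scan is unaffected by the rest of the batch.
    out = []
    for sample in batch:
        m = sum(sample) / len(sample)
        v = sum((x - m) ** 2 for x in sample) / len(sample)
        out.append(normalize(sample, m, v))
    return out

def batch_norm(batch):
    # All samples share statistics computed across the whole mini-batch,
    # so one outlier scan shifts every other sample's normalization.
    flat = [x for sample in batch for x in sample]
    m = sum(flat) / len(flat)
    v = sum((x - m) ** 2 for x in flat) / len(flat)
    return [normalize(sample, m, v) for sample in batch]

batch = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]
print(instance_norm(batch)[0])  # identical whether or not the second scan is present
print(batch_norm(batch)[0])     # shifted by the second scan's large intensities
```

With the small, heterogeneous batches that 3-D MRI volumes force, this per-instance behavior is one plausible reason instance normalization trains more stably here.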

Compared with the volume/thickness model, the deep learning model:

* is more accurate
* is significantly faster, since the volume/thickness model requires volumes and thicknesses to be extracted beforehand
* can also be used to forecast progression
* relies on a wide range of regions associated with Alzheimer's disease
* can automatically learn to identify imaging biomarkers that are predictive of Alzheimer's disease, and leverage them to achieve accurate early detection of the disease

Together, these insights yield an improvement of approximately 14% in test accuracy over existing models.

<!--
<p float="left" align="center">
<img src="data_examples/visualization_02.png" width="200" />
<img src="data_examples/visualization_01.png" width="200" />
<img src="data_examples/visualization_03.png" width="200" />
</p> -->

<p float="left" align="center">
<img src="https://easymedai.com/models/design-for-ad/git/ci/main/tree/all_resized.gif?format=raw" width="500" />
<figcaption align="center">
Figure 1. Visualization of the aggregated importance of each voxel (in yellow) in the deep learning model when classifying subjects into Cognitively Normal, Mild Cognitive Impairment, and Alzheimer's Disease.
</figcaption>
</p>

<h2>Prerequisites</h2>

- Python 3.6
- PyTorch 0.4
- torchvision
- progress
- matplotlib
- numpy
- visdom

<h2>License</h2>

This repository is licensed under the terms of the GNU AGPLv3 license.

<h2>Download ADNI data</h2>

1. Request approval and register at the [ADNI website](http://adni.loni.usc.edu/data-samples/access-data/).
2. Download both the scans and the clinical data. From the main page, click on `PROJECTS` and `ADNI`. To download the imaging data, click on `Download` and choose `Image collections`. In the `Advanced search` tab, untick `ADNI 3` and tick `MRI` to download all the MR images.
3. In the `Advanced search results` tab, click `Select All` and `Add To Collection`. Finally, in the `Data Collection` tab, select the collection you just created, tick `All` and click on `Advanced download`. We advise grouping the files into 10 zip files. To download the clinical data, click on `Download` and choose `Study Data`. Select all the csv files present in `ALL` by ticking `Select ALL tabular data`, then click `Download`.

<h2>Data Preprocessing</h2>

Data preprocessing with Clinica:

1. **Convert data into BIDS format**: please read the docs on the [Clinica website](http://www.clinica.run/doc/DatabasesToBIDS/#adni-to-bids), install the required software, and download the required clinical files. Note that we first preprocess the training set to generate a template, then use that template to preprocess the validation and test sets. The template we used for data preprocessing can be downloaded from this [link](https://drive.google.com/file/d/1KurgyjQP-KReEO0gf31xxjwE5R-xuSRB/view?usp=sharing). The script we use to run the converter is at /datasets/files:
```
run_convert.sh
```

2. **Preprocess the converted and split data**: refer to our scripts at /datasets/files. For the training data, refer to:
```
run_adni_preprocess.sh
```
For the validation and test sets, refer to:
```
run_adni_preprocess_val.sh
```
and
```
run_adni_preprocess_test.sh
```

<h2>Examples in the preprocessed dataset</h2>

Here are some example scans for each category in our test dataset:

<p align="center">
<img src="https://easymedai.com/models/design-for-ad/git/ci/main/tree/data_examples/CN_example.png?format=raw" width="600" />
<img src="https://easymedai.com/models/design-for-ad/git/ci/main/tree/data_examples/MCI_example.png?format=raw" width="600" />
<img src="https://easymedai.com/models/design-for-ad/git/ci/main/tree/data_examples/AD_example.png?format=raw" width="600" />
</p>

<h2>Neural Network Training</h2>

Train the network on the ADNI dataset:

```
python main.py
```

You can create your own config file and add a **--config** flag to indicate its name.

<h2>Model Evaluation</h2>

We provide evaluation code in **Model_eval.ipynb**, where you can load and evaluate our trained model. The best trained model (with widening factor 8 and age added) can be found [here](https://drive.google.com/file/d/1zU21Kin9kXg_qmj7w_u5dGOjXf1D5fa7/view?usp=sharing).
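
The best model adds age as an extra input. One simple way such fusion can work, shown here as a hypothetical sketch rather than the repository's actual implementation (the z-scoring constants are made-up placeholders), is to append a normalized age value to the pooled image features before the final classifier:

```python
def fuse_age(image_features, age, age_mean=73.0, age_std=7.0):
    # Hypothetical fusion step: append the z-scored age to the pooled
    # CNN feature vector before the final classification layer.
    # age_mean / age_std are placeholder constants, not values from the paper.
    return image_features + [(age - age_mean) / age_std]

feats = [0.12, -0.40, 0.88]   # toy pooled image features
print(fuse_age(feats, 80.0))  # -> [0.12, -0.4, 0.88, 1.0]
```

Appending age this way lets the classifier adjust its decision boundary for age-related atrophy without changing the convolutional trunk.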

<h2>Results</h2>

<center>

| Dataset | ADNI held-out | ADNI held-out | NACC external validation | NACC external validation |
| ----------------- | -------------------- | ---------------------- | ----------------------- | ------------------------ |
| Model | Deep learning model | Volume/thickness model | Deep learning model | Volume/thickness model |
| Cognitively Normal | 87.59 | 84.45 | 85.12 | 80.77 |
| Mild Cognitive Impairment | 62.59 | 56.95 | 62.45 | 57.88 |
| Alzheimer’s Disease Dementia | 89.21 | 85.57 | 89.21 | 81.03 |

</center>

Table 1: Classification performance on the ADNI held-out set and an external validation set. Area under the ROC curve for classification based on the deep learning model vs the ROI-volume/thickness model, for the ADNI held-out set and the NACC external validation set. The deep learning model outperforms the ROI-volume/thickness-based model in all classes. Please refer to the [paper](https://www.nature.com/articles/s41598-022-20674-x) for more details.

<p float="left" align="center">
<img src="https://easymedai.com/models/design-for-ad/git/ci/main/tree/AD_progression_new.png?format=raw" width="800" />
<figcaption align="center">
Figure: Progression analysis for MCI subjects. The subjects in the ADNI test set are divided into two groups based on the classification results of the deep learning model from their first scan diagnosed as MCI: group A if the prediction is AD, and group B if it is not. The graph shows the fraction of subjects that progressed to AD at different months following the first scan diagnosed as MCI for both groups. Subjects in group A progress to AD at a significantly faster rate, suggesting that the features extracted by the deep learning model may be predictive of the transition.
</figcaption>
</p>

<center>

| Method | Acc. | Balanced Acc. | Micro-AUC | Macro-AUC |
| ----------------- | ----------- | ----------- | ----------- | ----------- |
| ResNet-18 3D | 52.4% | 53.1% | - | - |
| AlexNet 3D | 57.2% | 56.2% | 75.1% | 74.2% |
| X 1 | 56.4% | 54.8% | 74.2% | 75.6% |
| X 2 | 58.4% | 57.8% | 77.2% | 76.6% |
| X 4 | 63.2% | 63.3% | 80.5% | 77.0% |
| X 8 | 66.9% | 67.9% | 82.0% | 78.5% |
| **X 8 + age** | 68.2% | 70.0% | 82.0% | 80.0% |

</center>

Table 2: Classification performance on the ADNI held-out set with different neural network architectures. Please refer to the [paper](http://proceedings.mlr.press/v116/liu20a) for more details.
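
Table 2 reports balanced accuracy alongside plain accuracy. Balanced accuracy is the mean of the per-class recalls, so minority classes (e.g. AD scans) count as much as the majority class. A self-contained sketch with toy labels (not data from the paper):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: each class contributes equally,
    # regardless of how many samples it has.
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy labels: CN=0, MCI=1, AD=2
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 0, 2]
print(balanced_accuracy(y_true, y_pred))  # (2/3 + 1/2 + 1/1) / 3 ≈ 0.722
```

This is why a model that over-predicts the majority class can score well on plain accuracy but poorly on balanced accuracy.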

<h2>References</h2>

```
@article{liu2022generalizable,
  title={Generalizable deep learning model for early Alzheimer’s disease detection from structural MRIs},
  author={Liu, Sheng and Masurkar, Arjun V and Rusinek, Henry and Chen, Jingyun and Zhang, Ben and Zhu, Weicheng and Fernandez-Granda, Carlos and Razavian, Narges},
  journal={Scientific Reports},
  volume={12},
  number={1},
  pages={1--12},
  year={2022},
  publisher={Nature Publishing Group}
}
```

```
@inproceedings{liu2020design,
  title={On the design of convolutional neural networks for automatic detection of Alzheimer’s disease},
  author={Liu, Sheng and Yadav, Chhavi and Fernandez-Granda, Carlos and Razavian, Narges},
  booktitle={Machine Learning for Health Workshop},
  pages={184--201},
  year={2020},
  organization={PMLR}
}
```