# ADPKD-segmentation for determining Total Kidney Volume (TKV)

Autosomal dominant polycystic kidney disease (ADPKD) segmentation in [PyTorch](https://github.com/pytorch/pytorch)

Project design, data management, and implementation of the first version of the polycystic kidney work by [Akshay Goel, MD](https://www.linkedin.com/in/akshay-goel-md/).

Follow-up work by researchers at Weill Cornell Medicine and Cornell University.

# Published in Radiology: Artificial Intelligence (RSNA) in 2022

Goel A, Shih G, Riyahi S, Jeph S, Dev H, Hu R, et al. Deployed Deep Learning Kidney Segmentation for Polycystic Kidney Disease MRI. Radiology: Artificial Intelligence. p. e210205.

Published online: Feb 16, 2022. https://doi.org/10.1148/ryai.210205

# Multiorgan Extension Published in Tomography 2022

Sharbatdaran A, Romano D, Teichman K, Dev H, Raza SI, Goel A, Moghadam MC, Blumenfeld JD, Chevalier JM, Shimonov D, Shih G, Wang Y, Prince MR. Deep Learning Automation of Kidney, Liver, and Spleen Segmentation for Organ Volume Measurements in Autosomal Dominant Polycystic Kidney Disease. Tomography. 2022; 8(4):1804-1819. https://doi.org/10.3390/tomography8040152. URL: https://www.mdpi.com/1723226

## See additional README files for more info on:

- [Training/Validation data](data/README.md)
- [Inference input data](inference_input/README.md)
- [Saved inference output files](saved_inference/README.md)
- [Addition ensemble extension](addition_ensemble/README.md)
- [Argmax ensemble and multisequence extension](argmax_ensemble/README.md)

## Preliminary Results Presented as Abstract at SIIM 2020

[Convolutional Neural Networks for Automated Segmentation of Autosomal Dominant Polycystic Kidney Disease. Oral presentation at the Society for Imaging Informatics in Medicine 2020, Austin, TX](https://cdn.ymaws.com/siim.org/resource/resmgr/siim20/abstracts-research/goel_convolutional_neural_ne.pdf)

## Examples of Performance on Unseen Multi-institute External Data

Inference was performed with [checkpoints/inference.yml](checkpoints/inference.yml) and the checkpoint [checkpoints/best_val_checkpoint.pth](checkpoints/best_val_checkpoint.pth).

![Example ADPKD MRI Data](adpkd_inference_ext_50.gif)

![Multi-Institute External Performance](external-data-performance.png)

# Steps to run:

## **Sidenote on Configs and Experiments**

- All experimental objects are defined by YAML configuration files.
- Configuration files are instantiated via [config_utils.py](adpkd_segmentation/config/config_utils.py).
- Select prior experiments can be reproduced via the corresponding config files (see [experiments/configs](experiments/configs)).
- TensorBoard runs and model checkpoints for these experiments are saved (see [experiments/runs](experiments/runs)).
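
To illustrate how YAML-driven experiment objects can be wired up, here is a minimal registry sketch in plain Python. The names (`register`, `build_from_config`, `DiceLoss`, the `smooth` parameter) are hypothetical illustrations and do not reflect the actual schema in `config_utils.py`:

```python
# Hypothetical sketch of config-driven instantiation, loosely modeled on
# what a config utilities module might do. All names are illustrative.

REGISTRY = {}


def register(name):
    """Decorator that maps a config 'name' key to a Python class."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap


@register("dice_loss")
class DiceLoss:
    def __init__(self, smooth=1.0):
        self.smooth = smooth


def build_from_config(spec):
    """Instantiate the class registered under spec['name'] with its kwargs."""
    kwargs = {k: v for k, v in spec.items() if k != "name"}
    return REGISTRY[spec["name"]](**kwargs)


loss = build_from_config({"name": "dice_loss", "smooth": 0.5})
print(type(loss).__name__, loss.smooth)  # DiceLoss 0.5
```

With a pattern like this, each section of a YAML config (losses, optimizers, dataloaders) can be parsed into a dict and handed to a builder, keeping experiments fully described by their config files.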

## **Inference**

#### 1. Install `requirements.txt` and the `adpkd-segmentation` package from source.

`pip install -e . -f https://download.pytorch.org/whl/torch_stable.html`

#### A sidenote on installation

- `requirements.txt` and `adpkd-segmentation` are supported on `python 3.8`.
- Specifically, the pipeline has been tested on `python 3.8.4` and `python 3.8.5`.
- For best results, we recommend installing all packages in a `python 3.8` environment.
- Once `python 3.8.4` or `3.8.5` is installed, create a virtual environment with `virtualenv`.
- You may want to reference the virtual environment documentation:
  - [Basic virtual environment tutorial](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment)
  - [Package installation for an environment](https://docs.python.org/3/library/venv.html)

However, we provide a short example here:

```
(Windows)
working_dir>pip install virtualenv
working_dir>py -3.8 -m venv Drive:\path\to\environment_name\

Powershell Activation:
working_dir>Drive:\path\to\environment_name\Scripts\activate.ps1

Windows Command Line:
working_dir>Drive:\path\to\environment_name\Scripts\activate

working_dir>python setup.py install

(Unix/MacOS)
$ pip install virtualenv
$ python3.8 -m venv /path/to/environment_name
$ source /path/to/environment_name/bin/activate
$ python setup.py install
```

#### 2. Select an inference config file.

- To build the model for our best results, use [checkpoints/inference.yml](checkpoints/inference.yml), which points to the corresponding checkpoint [checkpoints/best_val_checkpoint.pth](checkpoints/best_val_checkpoint.pth).

#### 3. Run the inference script:

```
$ python adpkd_segmentation/inference/inference.py -h
usage: inference.py [-h] [--config_path CONFIG_PATH] [-i INFERENCE_PATH] [-o OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  --config_path CONFIG_PATH
                        path to config file for inference pipeline
  -i INFERENCE_PATH, --inference_path INFERENCE_PATH
                        path to input dicom data (replaces path in config file)
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        path to output location
```
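
For scripting around this CLI, the flag handling can be mirrored with a small `argparse` sketch. This is a standalone illustration of the interface shown in the help text above, not the parser from `inference.py` itself, and the argument values are examples:

```python
import argparse

# Standalone mirror of the CLI flags shown in the help text above.
parser = argparse.ArgumentParser(prog="inference.py")
parser.add_argument("--config_path",
                    help="path to config file for inference pipeline")
parser.add_argument("-i", "--inference_path",
                    help="path to input dicom data (replaces path in config file)")
parser.add_argument("-o", "--output_path",
                    help="path to output location")

# Example invocation: best-results config, repo-style input/output folders.
args = parser.parse_args([
    "--config_path", "checkpoints/inference.yml",
    "-i", "inference_input",
    "-o", "saved_inference",
])
print(args.inference_path, args.output_path)  # inference_input saved_inference
```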

## **Training Pipeline**

#### 1. Install pip packages.

Install from `requirements.txt` (inside some virtual environment):

```
pip install -e . -f https://download.pytorch.org/whl/torch_stable.html
```

#### 2. Set up data as described [here](data/README.md).

Note: Depending on the dataloader, you may need to create a train/validation/test JSON file to indicate the splits.
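
The exact JSON schema depends on the dataloader, but a split file along these lines can be generated with the standard library. The keys, patient IDs, and 60/20/20 ratio here are purely illustrative:

```python
import json
import random

# Hypothetical patient IDs; replace with your actual study identifiers.
patients = [f"patient_{i:03d}" for i in range(10)]
random.seed(0)  # fixed seed so the split is reproducible
random.shuffle(patients)

# Illustrative 60/20/20 split; your dataloader may expect different keys.
splits = {
    "train": patients[:6],
    "val": patients[6:8],
    "test": patients[8:],
}

split_json = json.dumps(splits, indent=2)
print(split_json)
```

Splitting at the patient level (rather than the slice level) avoids leaking slices from the same study across train and validation sets.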

#### 3. Select (or create) a config file. See examples at [experiments/configs](experiments/configs).

#### 4. Run training:

```
$ python -m adpkd_segmentation.train --config path_to_config_yaml --makelinks
```

- If using a specific GPU (e.g. device 2):

```
$ CUDA_VISIBLE_DEVICES=2 python -m adpkd_segmentation.train --config path_to_config_yaml --makelinks
```

The `makelinks` flag is optional and needed only once to create symbolic links to the data.

#### 5. Evaluate:

```
$ python -m adpkd_segmentation.evaluate --config path_to_config_yaml --makelinks
```

If using a specific GPU (e.g. device 2):

```
$ CUDA_VISIBLE_DEVICES=2 python -m adpkd_segmentation.evaluate --config path_to_config_yaml --makelinks
```

For generating TKV calculations:

```
$ python -m adpkd_segmentation.evaluate_patients --config path_to_config_yaml --makelinks --out_path output_csv_path
```
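
For context on the TKV numbers, total kidney volume is conventionally the count of segmented kidney voxels multiplied by the voxel volume. A back-of-the-envelope sketch, where the voxel spacing and per-slice counts are made-up numbers rather than the script's actual schema:

```python
# Hypothetical in-plane spacing of 1.5 mm and slice thickness of 3.0 mm.
voxel_volume_ml = (1.5 * 1.5 * 3.0) / 1000.0  # mm^3 per voxel -> mL

# Made-up per-slice counts of voxels labeled as kidney by the model.
slice_voxel_counts = [1200, 1850, 2100, 1900, 1400]

tkv_ml = sum(slice_voxel_counts) * voxel_volume_ml
print(f"TKV: {tkv_ml:.2f} mL")  # TKV: 57.04 mL
```

In practice the spacing comes from the DICOM headers of each series, so the per-voxel volume can differ between studies.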

## Misc:

- `multi_train.py` can be used to run multiple training runs in sequence.
- `create_eval_configs.py` is a utility script that creates evaluation configs from the starting training config. This is also done automatically inside `train.py`.

## Contact

For questions or comments, please feel free to email me at <akshay.k.goel@gmail.com>.

## Citing

[![DOI](https://zenodo.org/badge/363872703.svg)](https://zenodo.org/badge/latestdoi/363872703)

```
@misc{Goel:2021,
  Author = {Akshay Goel},
  Title = {ADPKD Segmentation in PyTorch},
  Year = {2021},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/aksg87/adpkd-segmentation-pytorch}}
}
```

## License <a name="license"></a>

This project is distributed under the MIT License.

## Acknowledgement

Model architecture utilized from [Segmentation Models Pytorch](https://github.com/qubvel/segmentation_models.pytorch) by Pavel Yakubovskiy.

## **Linters and Formatters**

Please apply these prior to any PRs to this repository.

- Linter: `flake8` [link](https://flake8.pycqa.org/en/latest/)
- Formatter: `black --line-length 79` [link](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html)

If you use VSCode, you can add these to your settings as follows:

```
  "python.formatting.provider": "black",
  "python.linting.flake8Enabled": true,
  "python.formatting.blackArgs": [
    "--experimental-string-processing",
    "--line-length",
    "79",
  ],
```