# DABC-Net

The DABC-Net toolkit is designed for fast and robust pneumonia segmentation and prediction of COVID-19 progression on chest CT scans.
The core of the toolkit, DABC-Net, is a novel deep learning (DL) network that combines a 2D U-net for intra-slice spatial information processing with a recurrent LSTM network to leverage inter-slice context.
Compared to other popular volumetric segmentation networks such as 3D U-net, DABC-Net is much faster and more robust to CT scans with varying slice thickness.

Based on DABC-Net segmentation, we can predict disease progression, i.e. whether a specific patient will progress to a severe stage, using his/her first two CT scans.
This repository provides an implementation of DABC-Net (including a graphical user interface), which can potentially be used to support early triage of severe patients.

<b>The main features:</b>
* Ready to use (you can run our toolkit via the GUI without installing TensorFlow or a Python interpreter on your computer)
* Runs everywhere (desktop app, web, or console)
* Data anonymization by deleting the CT header file
* Fast segmentation
* Built-in multiple types of uncertainty
* Prediction of patient progression: mild vs. severe
* Support for COVID-19 longitudinal studies

## Table of Contents
* [Installation](#installation)
* [Quick start](#quick-start)
    + [DABC-Net for desktop app](#dabc-net-for-desktop-app)
    + [DABC-Net for Colab](#dabc-net-for-colab)  [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Robin970822/DABC-Net-for-COVID-19/blob/master/DABC_pipeline_demo.ipynb)
    + [DABC-Net for Website](#dabc-net-for-website)
* [Progress prediction](#progress-prediction)
    + [Model](#model)
    + [Usage](#usage)
    + [Visualization of progress](#visualization-of-progress)
* [Data](#data)
* [Tutorial](#tutorial)

## Installation
If you run this toolkit via the packaged desktop app, you can skip this step.

An Nvidia GPU is needed for fast inference (about 16 ms/slice on a 1080 Ti GPU).

Requirements:

* tensorflow-gpu == 1.15.4
* Keras == 2.2.4
* numpy == 1.16
* scikit-learn == 0.21.3
* scikit-image == 0.14
* xgboost == 1.1.0
* simpleitk == 2.0
* scipy == 1.1

Install dependencies:

```
cd path/to/repository/
pip install -r requirement.txt
```
The project folder looks like this:

```
path
├─ ... (Code from the DABC-Net-for-COVID-19 repository. Use download.sh to get the following files.)

├─Input_data
│      2020034797_0123_2949_20200123015940_4.nii.gz
│      2020034797_0125_3052_20200125111145_4.nii.gz
│      ...

├─Output_data
│   │
│   ├─covid
│   │      2020034797_0123_2949_20200123015940_4.nii.gz
│   │      2020034797_0125_3052_20200125111145_4.nii.gz
│   │      ...
│   │
│   ├─lung
│   │      2020034797_0123_2949_20200123015940_4.nii.gz
│   │      2020034797_0125_3052_20200125111145_4.nii.gz
│   │      ...
│   │
│   └─uncertainty
│           2020034797_0123_2949_20200123015940_4_predictive_aleatoric.nii.gz
│           2020034797_0125_3052_20200125111145_4_sample_1.nii.gz
│           ...

├─weight
│       model_05090017
│       ...

│ (the following folders are only required for longitudinal studies)

├─meta
│       2020035021.csv

└─model
        prediction.pkl
        ...

```

## Quick Start

### DABC-Net for desktop app
#### Inference:
1. Download and double-click the DABC_Net.exe (Windows) or DABC_Mac (macOS) file.
You can run our network without installing TensorFlow or a Python interpreter on your computer.
The UI looks like this:

   ![Alt text](fig/fig1.png "fig.1")

2. Type or select the input folder where you store CT scans in nii/nii.gz format. The output results will be saved in the folder you specify.

3. Choose the sform code name; the default value is 'NIFTI_XFORM_SCANNER_ANAT'. Some scans without complete header files may lack this value (e.g. data from radiopaedia.org). In this case, please select 'OTHERS' as the sform name. For more details about header files, please see this [site](https://brainder.org/2012/09/23/the-nifti-file-format/ "With a Title").

4. Click the 'Run' button. Once all inference is done, the progress bar window will close.

   ![fig.2](fig/fig2.png)

   Here are some examples:
   ![fig.4](fig/fig4.png)

#### Uncertainty:

In DABC-Net, we approximate Bayesian inference using [DropBlock](http://papers.nips.cc/paper/8271-dropblock-a-regularization-method-for-convolutional-networks), a form of Monte Carlo dropout. For more details about aleatoric and epistemic uncertainty, please refer to this [paper](https://pdfs.semanticscholar.org/146f/8844a380191a3f883c3584df3d7a6a56a999.pdf).

1. Follow items 1-3 in the section above.

2. Choose the number of samples (an integer, e.g. 10). The network will sample 10 times to compute aleatoric/epistemic uncertainty and take the mean prediction as the final segmentation.

3. Set 'threshold' to get a binary output if you need one. The default value is 0.5. If you want to save the raw probability map from the last sigmoid activation layer of the network, set the threshold to 0.

4. 'Method' denotes which kind of uncertainty will be saved.

   ![](fig/fig3.png)

   Here are some examples:

   ![](fig/fig5.png)

#### Visualization:

* Raw: original CT scan
* Lung: output of lung segmentation (optional)
* Lesion: output of lesion segmentation

Then choose an appropriate HU range (e.g. -1024~512) via the right slide window.

![](fig/tool_visual.png)

146
#### Progress predict:
145
#### Progress predict:
147
* Meta data: Csv format data. Put the path of data or click 'Demo' button to get an example.
146
* Meta data: Csv format data. Put the path of data or click 'Demo' button to get an example.
148
* Method: Use First two scans / First three scans / First scan to predict the progress of disease.
147
* Method: Use First two scans / First three scans / First scan to predict the progress of disease.
149
* Output path: (optional) Results will be saved to a text file. If this value is empty, file will save in working directory. 
148
* Output path: (optional) Results will be saved to a text file. If this value is empty, file will save in working directory. 
150
149
151
![](fig/predict_ui.png)
150
![](fig/predict_ui.png)
152
151
153
152
### DABC-Net for Colab

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Robin970822/DABC-Net-for-COVID-19/blob/master/DABC_pipeline_demo.ipynb)

#### Inference:
1. Put your data in a folder.
2. Select the input and output folders, and run the following command:
```
DABC_infer(input_path, output_path, usage, sform_code)
```
- input_path:
    - Input: folder path of the input data (nii or nii.gz format).
    - Type: string
- output_path:
    - Input: folder path of the output data. The segmentation results will be saved in nii.gz format.
    - Type: string
- usage:
   - Input: inference type.
   - Type: string, 'lung' or 'covid' (default)
- sform_code:
   - Input: coordinate system. In general, some scans without header files (e.g. data from radiopaedia.org) have an 'NIFTI_XFORM_UNKNOWN' sform code.
   - Type: bool, 1 for 'NIFTI_XFORM_SCANNER_ANAT' (default) or 0 for 'OTHERS'.

#### Uncertainty:
```
DABC_uncertainty(nii_filename, save_filename, sample_value, uncertainty, sform_code)
```
- nii_filename:
    - Input: path of the input data (nii or nii.gz format).
    - Type: string
- save_filename:
    - Input: folder path of the output data.
    - Type: string
- sample_value:
   - Input: number of Monte Carlo samples.
   - Type: int
- uncertainty:
   - Input: type of uncertainty to save. The results will be saved in nii.gz format.
   - Type: string, 'Predictive', 'Aleatoric', 'Epistemic' or 'Both'
- sform_code:
   - Input: coordinate system. In general, some scans without header files (e.g. data from radiopaedia.org) have an 'NIFTI_XFORM_UNKNOWN' sform code.
   - Type: bool, 1 for 'NIFTI_XFORM_SCANNER_ANAT' (default) or 0 for 'OTHERS'.

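Given the Monte Carlo samples, the three uncertainty types follow the standard entropy decomposition (predictive entropy = aleatoric + epistemic). A minimal numpy sketch, for illustration only — the function name, shapes, and toy data here are assumptions, not the toolkit's API:

```python
import numpy as np

def mc_uncertainty(probs, eps=1e-7):
    """Decompose uncertainty from Monte Carlo dropout samples.

    probs: array of shape (T, ...) holding T sampled sigmoid probability maps.
    Returns (predictive, aleatoric, epistemic) uncertainty maps.
    """
    p_mean = probs.mean(axis=0)
    # Predictive uncertainty: binary entropy of the mean prediction.
    predictive = -(p_mean * np.log(p_mean + eps)
                   + (1 - p_mean) * np.log(1 - p_mean + eps))
    # Aleatoric uncertainty: mean binary entropy over the samples.
    ent = -(probs * np.log(probs + eps)
            + (1 - probs) * np.log(1 - probs + eps))
    aleatoric = ent.mean(axis=0)
    # Epistemic uncertainty: the gap between the two (mutual information).
    epistemic = predictive - aleatoric
    return predictive, aleatoric, epistemic

# Toy example: 10 MC samples of a 2-voxel probability map.
samples = np.random.default_rng(0).uniform(0.1, 0.9, size=(10, 2))
pred, alea, epis = mc_uncertainty(samples)
```

Because entropy is concave, the epistemic term is always non-negative; it shrinks where the samples agree and grows where they disagree.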
For more details, please refer to the [notebook](https://colab.research.google.com/github/Robin970822/DABC-Net-for-COVID-19/blob/master/DABC_pipeline_demo.ipynb).

### DABC-Net for Website

- [ ] Update by Dec 20

## Progress prediction

### Model
#### Feature
Features we used:

| Feature                    | Scan          |
| -------------------------- | ------------- |
| Left lesion volume         | scan0 & scan1 |
| Left lung volume           | scan0 & scan1 |
| Left lesion ratio          | scan0 & scan1 |
| Left consolidation volume  | scan0 & scan1 |
| Left weighted volume       | scan0 & scan1 |
| Left z-position            | scan0 & scan1 |
| Right lesion volume        | scan0 & scan1 |
| Right lung volume          | scan0 & scan1 |
| Right lesion ratio         | scan0 & scan1 |
| Right consolidation volume | scan0 & scan1 |
| Right weighted volume      | scan0 & scan1 |
| Right z-position           | scan0 & scan1 |
| Age                        | scan0 & scan1 |
| Sex                        | scan0 & scan1 |

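The volume-type features above can be derived from the segmentation masks. A minimal numpy sketch, assuming binary lung/lesion masks and a known voxel spacing in mm (the function and field names are illustrative, not the toolkit's API):

```python
import numpy as np

def lesion_features(lung_mask, lesion_mask, spacing):
    """Compute simple volume features from binary segmentation masks.

    lung_mask, lesion_mask: boolean arrays of the same shape.
    spacing: (sx, sy, sz) voxel size in mm.
    """
    voxel_ml = float(np.prod(spacing)) / 1000.0  # mm^3 -> mL
    lung_vol = lung_mask.sum() * voxel_ml
    lesion_vol = lesion_mask.sum() * voxel_ml
    ratio = lesion_vol / lung_vol if lung_vol > 0 else 0.0
    return {"lung_volume_ml": lung_vol,
            "lesion_volume_ml": lesion_vol,
            "lesion_ratio": ratio}

# Toy 4x4x4 scan: 32 lung voxels, 8 of which are lesion.
lung = np.zeros((4, 4, 4), dtype=bool); lung[:2] = True
lesion = np.zeros((4, 4, 4), dtype=bool); lesion[0, :2] = True
feats = lesion_features(lung, lesion, spacing=(1.0, 1.0, 1.0))
```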
#### Base learner
Base learners we used:

| Base learner        | MinMaxScaler Necessity | Feature Importance |
| ------------------- | ---------------------- | ------------------ |
| SVM                 | True                   | False              |
| MLP                 | True                   | False              |
| Logistic Regression | True                   | False              |
| Naive Bayes         | False                  | False              |
| Random Forest       | False                  | True               |
| AdaBoost            | False                  | True               |
| Gradient Boost      | False                  | True               |
| XGBoost             | False                  | True               |

##### MinMaxScaler
For base learners that are sensitive to data normalization (SVM, MLP, ...), we provide min-max normalization based on our training dataset. The weights without the min-max scaler (TODO) are also provided, with fewer base learners and lower performance.

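The key point is that the scaler is fitted on the training features and then reused unchanged on new patients. A minimal scikit-learn sketch with illustrative data (the feature values are made up):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative training features (rows: patients; cols: e.g. lesion ratio, age).
X_train = np.array([[0.10, 52.0],
                    [0.35, 67.0],
                    [0.02, 41.0]])
X_new = np.array([[0.20, 60.0]])  # a new patient at inference time

scaler = MinMaxScaler().fit(X_train)    # stores per-feature min/range from training data
X_new_scaled = scaler.transform(X_new)  # reuses those statistics, never refits
```

Refitting the scaler on test data would leak its distribution into the features, which is why the trained scaler ships with the model weights.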
### Usage
#### Prediction
```
pred = predict_base_learners(base_learners, feature)
```
- base_learners:
   - Input: trained base learners.
   - Type: dict, {name: learner}, where name is the learner's name and learner is a fitted sklearn learner.
- feature:
   - Input: preprocessed features.
   - Type: array, shape m x n, where m is the number of samples and n is the number of features.
- pred:
   - Output: probabilities predicted by the base learners.
   - Type: array, shape m x k, where m is the number of samples and k is the number of base learners.

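A minimal sketch of what such a helper might look like — an illustrative reimplementation, not the repository's exact code — assuming each learner is a fitted sklearn classifier exposing `predict_proba`:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def predict_base_learners(base_learners, feature):
    """Stack each learner's positive-class probability into an m x k array."""
    pred = np.zeros((feature.shape[0], len(base_learners)))
    for i, (name, learner) in enumerate(base_learners.items()):
        # Column i holds learner i's probability of the severe class.
        pred[:, i] = learner.predict_proba(feature)[:, 1]
    return pred

# Toy demo with two fitted base learners on synthetic data.
X, y = make_classification(n_samples=60, n_features=4, random_state=0)
learners = {"lr": LogisticRegression().fit(X, y),
            "nb": GaussianNB().fit(X, y)}
pred = predict_base_learners(learners, X)  # shape: (60, 2)
```

The m x k output can then be averaged (or fed to a meta-learner) to obtain the final ensemble probability per patient.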
### Visualization of progress
Here are some examples:

#### Progression curve of a severe patient:

![](fig/progress_curve_severe.png)

#### Progression curve of a mild patient:

![](fig/progress_curve_mild.png)

x-axis: time (days); y-axis: lesion ratio

#### Visualization of scans at different time points

![](fig/progress_severe.png)

![](fig/progress_mild.png)

## Data

Dataset with expert annotations and benchmark:
* [1] - Ma Jun, Ge Cheng, Wang Yixin, An Xingle, Gao Jiantao, … He Jian. (2020). COVID-19 CT Lung and Infection Segmentation Dataset (Version 1.0) [Data set]. Zenodo. [DOI](https://zenodo.org/record/3757476)

Data sources:
* [2] - Paiva, O., 2020. CORONACASES.ORG - Helping Radiologists To Help People In More Than 100 Countries! \| Coronavirus Cases - 冠状病毒病例. [online] Coronacases.org. Available at: [link](https://Coronacases.org) [Accessed 20 March 2020].
* [3] - Glick, Y., 2020. Viewing Playlist: COVID-19 Pneumonia \| Radiopaedia.Org. [online] Radiopaedia.org. Available at: [link](https://Radiopaedia.org) [Accessed 20 April 2020].

## Notes

Acknowledgements: We thank the [COVID-19-CT-Seg-Benchmark repository](https://github.com/JunMa11/COVID-19-CT-Seg-Benchmark) for providing the COVID-19 segmentation dataset and benchmark. We also thank this [repository](https://github.com/EdwinZhang1970/Python/tree/master/tkinter-pack%20Demo) for providing ideas for designing the UI.

Disclaimer: This toolkit is for research purposes only and is not approved for clinical use.