# OrganSegC2F: a coarse-to-fine organ segmentation framework

version 1.11 - Dec 3 2017 - by Yuyin Zhou and Lingxi Xie

### Please note: an improved version of OrganSegC2F named OrganSegRSTN is available: https://github.com/198808xc/OrganSegRSTN

It outperforms OrganSegC2F by 84.50% vs. 82.37% on the NIH pancreas segmentation dataset.

#### Also NOTE: some functions have been optimized in the OrganSegRSTN repository but have not yet been transferred here.
We will port these changes in the near future - they do not affect performance, but they make the testing process MUCH faster.

#### **Yuyin Zhou is the main contributor to this repository.**

Yuyin Zhou proposed the algorithm, created the framework, and implemented the main functions.
Lingxi Xie later wrapped up these codes for release.

#### If you use our codes, please cite our paper accordingly:

**Yuyin Zhou**, Lingxi Xie, Wei Shen, Yan Wang, Elliot K. Fishman, Alan L. Yuille,
"A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans",
in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Quebec City, Quebec, Canada, 2017.

https://arxiv.org/abs/1612.08230

http://lingxixie.com/Projects/OrganSegC2F.html

All the materials released in this library can **ONLY** be used for **RESEARCH** purposes.

The authors and their institution (JHU/JHMI) preserve the copyright and all legal rights of these codes.

**Before you start, please note that there is a LAZY MODE,
which allows you to run the entire framework with ONE click.
Check the contents before Section 4.3 for details.**

## 1. Introduction

OrganSegC2F is a code package for our paper:
Yuyin Zhou, Lingxi Xie, Wei Shen, Yan Wang, Elliot K. Fishman, Alan L. Yuille,
"A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans",
in International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Quebec City, Quebec, Canada, 2017.

OrganSegC2F is a segmentation framework designed for 3D volumes.
It was originally designed for segmenting abdominal organs in CT scans,
but we believe that it can also be used for other purposes,
such as brain tissue segmentation in fMRI-scanned images.

OrganSegC2F is based on state-of-the-art deep learning techniques.
This code package is to be used with CAFFE, a deep learning library.
We make use of the Python interface of CAFFE, named pyCAFFE.

It is highly recommended to use one or more modern GPUs for computation.
Using CPUs takes at least 50x more computation time.

## 2. File List

| Folder/File                 | Description                                           |
|:--------------------------- |:----------------------------------------------------- |
| `README.md`                 | the README file                                       |
|                             |                                                       |
| **DATA2NPY/**               | codes to transfer the NIH dataset into NPY format     |
| `dicom2npy.py`              | transferring image data (DICOM) into NPY format       |
| `nii2npy.py`                | transferring label data (NII) into NPY format         |
|                             |                                                       |
| **DiceLossLayer/**          | CPU implementation of the Dice loss layer             |
| `dice_loss_layer.hpp`       | the header file                                       |
| `dice_loss_layer.cpp`       | the CPU implementation                                |
|                             |                                                       |
| **OrganSegC2F/**            | primary codes of OrganSegC2F                          |
| `coarse2fine_testing.py`    | the coarse-to-fine testing process                    |
| `coarse_fusion.py`          | the coarse-scaled fusion process                      |
| `coarse_surgery.py`         | the surgery function for coarse-scaled training       |
| `coarse_testing.py`         | the coarse-scaled testing process                     |
| `coarse_training.py`        | the coarse-scaled training process                    |
| `DataC.py`                  | the data layer in the coarse-scaled training          |
| `DataF.py`                  | the data layer in the fine-scaled training            |
| `fine_surgery.py`           | the surgery function for fine-scaled training         |
| `fine_training.py`          | the fine-scaled training process                      |
| `init.py`                   | the initialization functions                          |
| `oracle_fusion.py`          | the fusion process with oracle information            |
| `oracle_testing.py`         | the testing process with oracle information           |
| `run.sh`                    | the main program to be called in bash shell           |
| `utils.py`                  | the common functions                                  |
|                             |                                                       |
| **OrganSegC2F/prototxts/**  | network prototxt files of OrganSegC2F                 |
| `deploy_1.prototxt`         | the prototxt file for 1-slice testing                 |
| `deploy_3.prototxt`         | the prototxt file for 3-slice testing                 |
| `training_C1.prototxt`      | the prototxt file for 1-slice coarse-scaled training  |
| `training_C3.prototxt`      | the prototxt file for 3-slice coarse-scaled training  |
| `training_F1.prototxt`      | the prototxt file for 1-slice fine-scaled training    |
| `training_F3.prototxt`      | the prototxt file for 3-slice fine-scaled training    |

## 3. Installation

#### 3.1 Prerequisites

###### 3.1.1 Please make sure that your computer is equipped with modern GPUs that support CUDA.
Without them, you will need at least 50x more time in both the training and testing stages.

###### 3.1.2 Please also make sure that Python (we use version 2.7) is installed.

#### 3.2 CAFFE and pyCAFFE

###### 3.2.1 Download a CAFFE library from http://caffe.berkeleyvision.org/ .
Suppose your CAFFE root directory is $CAFFE_PATH.

###### 3.2.2 Place the Dice loss layer files in the correct positions:
    dice_loss_layer.hpp -> $CAFFE_PATH/include/caffe/layers/
    dice_loss_layer.cpp -> $CAFFE_PATH/src/caffe/layers/

###### 3.2.3 Make CAFFE and pyCAFFE.
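
A minimal sketch of steps 3.2.2 and 3.2.3, assuming the commands are issued from the root of this repository and that you use a standard Make-based CAFFE build (adapt `Makefile.config` and the `-j` value to your machine):

```bash
# 3.2.2: copy the Dice loss layer into the CAFFE source tree
cp DiceLossLayer/dice_loss_layer.hpp $CAFFE_PATH/include/caffe/layers/
cp DiceLossLayer/dice_loss_layer.cpp $CAFFE_PATH/src/caffe/layers/

# 3.2.3: build CAFFE and its Python interface (pyCAFFE)
cd $CAFFE_PATH
make all -j8
make pycaffe
```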

## 4. Usage

Please follow these steps to reproduce our results on the NIH pancreas segmentation dataset.

**NOTE**: Here we only provide the basic steps to run our codes on the NIH dataset.
For more detailed analysis and empirical guidelines for parameter setting
(this is very important, especially when you are using our codes on other datasets),
please refer to our technical report (check our webpage for updates).

#### 4.1 Data preparation

###### 4.1.1 Download the NIH data from https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT .
You should be able to download the image and label data individually.
Suppose your data directory is $RAW_PATH:
The image data are organized as $RAW_PATH/DOI/PANCREAS_00XX/A_LONG_CODE/A_LONG_CODE/ .
The label data are organized as $RAW_PATH/TCIA_pancreas_labels-TIMESTAMP/label00XX.nii.gz .

###### 4.1.2 Use our codes to transfer these data into NPY format.
Put dicom2npy.py under $RAW_PATH, and run: python dicom2npy.py .
The transferred data should be put under $RAW_PATH/images/
Put nii2npy.py under $RAW_PATH, and run: python nii2npy.py .
The transferred data should be put under $RAW_PATH/labels/
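
A command-line sketch of this step (assuming the two scripts are copied from the DATA2NPY/ folder of this repository and run with Python 2.7):

```bash
# 4.1.2: convert DICOM images and NII labels into NPY volumes
cp DATA2NPY/dicom2npy.py DATA2NPY/nii2npy.py $RAW_PATH/
cd $RAW_PATH
python dicom2npy.py    # writes image volumes to $RAW_PATH/images/
python nii2npy.py      # writes label volumes to $RAW_PATH/labels/
```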

###### 4.1.3 Suppose your directory to store experimental data is $DATA_PATH:
Put $CAFFE_PATH under $DATA_PATH/libs/
Put images/ under $DATA_PATH/
Put labels/ under $DATA_PATH/

NOTE: If you use other path(s), please modify the variable(s) in run.sh accordingly.
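
One possible way to lay out $DATA_PATH (a sketch only; the symbolic link is our assumption, and copying the directories works just as well):

```bash
# 4.1.3: assemble the experimental data directory
mkdir -p $DATA_PATH/libs
ln -s $CAFFE_PATH $DATA_PATH/libs/       # or copy the CAFFE directory here instead
mv $RAW_PATH/images $DATA_PATH/
mv $RAW_PATH/labels $DATA_PATH/
```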

#### 4.2 Initialization (requires: 4.1)

###### 4.2.1 Check run.sh and set $DATA_PATH accordingly.

###### 4.2.2 Set $ENABLE_INITIALIZATION=1 and run this script.
Several folders will be created under $DATA_PATH:
$DATA_PATH/images_X|Y|Z: the sliced image data (data are sliced for faster I/O).
$DATA_PATH/labels_X|Y|Z: the sliced label data (data are sliced for faster I/O).
$DATA_PATH/lists: used for storing training, testing and slice lists.
$DATA_PATH/logs: used for storing log files during the training process.
$DATA_PATH/models: used for storing models (snapshots) during the training process.
$DATA_PATH/prototxts: used for storing prototxts (called by training and testing nets).
$DATA_PATH/results: used for storing testing results (volumes and text results).
Depending on the I/O speed of your hard drive, the time cost may vary.
For a typical HDD, around 20 seconds are required for a 512x512x300 volume.
This process needs to be executed only once.
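
In practice this step looks roughly like the sketch below (the variable names are those used in run.sh as described above; the exact assignment syntax inside run.sh may differ slightly):

```bash
# 4.2: edit these variables near the top of OrganSegC2F/run.sh ...
DATA_PATH=/path/to/your/DATA_PATH/
ENABLE_INITIALIZATION=1
# ... then launch the script
bash run.sh
```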

NOTE: if you are using another dataset which contains multiple targets,
you can modify the variables "ORGAN_NUMBER" and "ORGAN_ID" in run.sh,
as well as the "is_organ" function in utils.py, to define your mapping function flexibly.
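
For example, a hypothetical multi-organ setting might look like this (the values below are illustrative only; make sure "is_organ" in utils.py maps your raw label values to the chosen ORGAN_ID):

```bash
# hypothetical run.sh settings for a dataset whose label volumes contain 3 organs,
# of which we want to segment the organ labeled 2
ORGAN_NUMBER=3
ORGAN_ID=2
```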

**LAZY MODE!**

You can run all the following modules with **one** execution!
* a) Enable everything (except initialization) in the beginning part.
* b) Set all the "PLANE" variables as "A" (4 in total) in the following part.
* c) Run this script! (A sketch of these settings is given below.)
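
A sketch of the lazy-mode settings in run.sh (the variable names below are those used throughout this README; check run.sh itself for the complete and authoritative list):

```bash
# LAZY MODE: enable every module except initialization ...
ENABLE_COARSE_TRAINING=1
ENABLE_COARSE_TESTING=1
ENABLE_COARSE_FUSION=1
ENABLE_FINE_TRAINING=1
ENABLE_ORACLE_TESTING=1
ENABLE_ORACLE_FUSION=1
ENABLE_COARSE2FINE_TESTING=1
# ... and process all three planes sequentially on one GPU
COARSE_TRAINING_PLANE=A
COARSE_TESTING_PLANE=A
FINE_TRAINING_PLANE=A
ORACLE_TESTING_PLANE=A
# then run:
bash run.sh
```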

#### 4.3 Coarse-scaled training (requires: 4.2)

###### 4.3.1 Check run.sh and set $COARSE_TRAINING_PLANE and $COARSE_TRAINING_GPU.
You need to run the X|Y|Z planes individually, so you can use 3 GPUs in parallel.
You can also set COARSE_TRAINING_PLANE=A, so that the three planes are trained in order on one GPU.
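
For instance, a hypothetical assignment for training only the Z plane on GPU 2 (launch the X and Y planes analogously in separate shells for parallel training, or set the plane to A for sequential training on a single GPU):

```bash
# train the coarse-scaled model on the Z plane only, using GPU 2
COARSE_TRAINING_PLANE=Z
COARSE_TRAINING_GPU=2
```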

###### 4.3.2 Set $ENABLE_COARSE_TRAINING=1 and run this script.
The following folders/files will be created:
Under $DATA_PATH/logs/, a log file named by the training information.
Under $DATA_PATH/models/snapshots/, a folder named by the training information.
Snapshots and solver-states will be stored in this folder.
The log file will also be copied into this folder after the entire training process.
On the axial view (training image size is 512x512, small input images make training faster),
every 20 iterations cost ~8s on a Titan-X Maxwell GPU, or ~5s on a Titan-X Pascal GPU.
As described in the paper, we need ~80K iterations, which take less than 10 GPU-hours.
After the training process, the log file will be copied to the snapshot directory.

#### 4.4 Coarse-scaled testing (requires: 4.3)

###### 4.4.1 Check run.sh and set $COARSE_TESTING_PLANE and $COARSE_TESTING_GPU.
You need to run the X|Y|Z planes individually, so you can use 3 GPUs in parallel.
You can also set COARSE_TESTING_PLANE=A, so that the three planes are tested in order on one GPU.

###### 4.4.2 Set $ENABLE_COARSE_TESTING=1 and run this script.
The following folder will be created:
Under $DATA_PATH/results/, a folder named by the training information.
Testing each volume costs ~20 seconds on a Titan-X Maxwell GPU, or ~13s on a Titan-X Pascal GPU.

#### 4.5 Coarse-scaled fusion (requires: 4.4)

###### 4.5.1 Fusion is performed on the CPU, and all X|Y|Z planes are combined in a single execution.

###### 4.5.2 Set $ENABLE_COARSE_FUSION=1 and run this script.
The following folder will be created:
Under $DATA_PATH/results/, a folder named by the fusion information.
The main cost of fusion lies in I/O and post-processing (removing non-maximum connected components).
In a future release, we will implement the post-processing in C for acceleration.

#### 4.6 Fine-scaled training (requires: 4.2)

###### 4.6.1 Check run.sh and set $FINE_TRAINING_PLANE and $FINE_TRAINING_GPU.
You need to run the X|Y|Z planes individually, so you can use 3 GPUs in parallel.
You can also set FINE_TRAINING_PLANE=A, so that the three planes are trained in order on one GPU.

###### 4.6.2 Set $ENABLE_FINE_TRAINING=1 and run this script.
The following folders/files will be created:
Under $DATA_PATH/logs/, a log file named by the training information.
Under $DATA_PATH/models/snapshots/, a folder named by the training information.
Snapshots and solver-states will be stored in this folder.
The log file will also be copied into this folder after the entire training process.
On the axial view (training image size is ~150x150, small input images make training faster),
every 20 iterations cost ~5s on a Titan-X Maxwell GPU, or ~3s on a Titan-X Pascal GPU.
As described in the paper, we need 60K iterations, which take less than 5 GPU-hours.
After the training process, the log file will be copied to the snapshot directory.

#### 4.7 Oracle testing (optional) (requires: 4.6)

**NOTE**: You can run the coarse-to-fine testing process even without this step.
This stage is still recommended, so that you can check the quality of the fine-scaled models.

###### 4.7.1 Check run.sh and set $ORACLE_TESTING_PLANE and $ORACLE_TESTING_GPU.
You need to run the X|Y|Z planes individually, so you can use 3 GPUs in parallel.
You can also set ORACLE_TESTING_PLANE=A, so that the three planes are tested in order on one GPU.

###### 4.7.2 Set $ENABLE_ORACLE_TESTING=1 and run this script.
The following folder will be created:
Under $DATA_PATH/results/, a folder named by the training information.
Testing each volume costs ~5 seconds on a Titan-X Maxwell GPU, or ~3s on a Titan-X Pascal GPU.

#### 4.8 Oracle fusion (optional) (requires: 4.7)

**NOTE**: You can run the coarse-to-fine testing process even without this step.
This stage is still recommended, so that you can check the quality of the fine-scaled models.

###### 4.8.1 Fusion is performed on the CPU, and all X|Y|Z planes are combined in a single execution.

###### 4.8.2 Set $ENABLE_ORACLE_FUSION=1 and run this script.
The following folder will be created:
Under $DATA_PATH/results/, a folder named by the fusion information.
The main cost of fusion lies in I/O and post-processing (removing non-maximum connected components).
In a future release, we will implement the post-processing in C for acceleration.

#### 4.9 Coarse-to-fine testing (requires: 4.4 & 4.6)

###### 4.9.1 Check run.sh and set $COARSE2FINE_TESTING_GPU.
Fusion is performed on the CPU, and all X|Y|Z planes are combined.
Currently the X|Y|Z testing processes are executed on one GPU, but this is not time-consuming.

###### 4.9.2 Set $ENABLE_COARSE2FINE_TESTING=1 and run this script.
The following folder will be created:
Under $DATA_PATH/results/, a folder named by the coarse-to-fine information (very long).
This function calls both the fine-scaled testing and fusion codes, so both GPU and CPU are used.
In a future release, we will implement the post-processing in C for acceleration.

**NOTE**: currently we set the maximum number of iteration rounds to 10 in order to observe the convergence.
Most often, the inter-DSC (the Dice-Sørensen coefficient between the segmentation results of two consecutive iterations) exceeds 95% after 2-3 iterations.
If you hope to save time, you can slightly modify the codes in coarse2fine_testing.py.
Each iteration takes ~20 seconds on a Titan-X Maxwell GPU, or ~15s on a Titan-X Pascal GPU.
If you set the threshold to 95%, this stage will be done within 1 minute on average.

Congratulations! You have finished the entire process. Check your results now!

## 5. Pre-trained Models on the NIH Dataset

**NOTE**: all these models were trained following our default settings.

The 82 cases in the NIH dataset are split into 4 folds:
* **Fold #0**: testing on Cases 01, 02, ..., 20;
* **Fold #1**: testing on Cases 21, 22, ..., 40;
* **Fold #2**: testing on Cases 41, 42, ..., 61;
* **Fold #3**: testing on Cases 62, 63, ..., 82.

We provide the coarse-scaled and fine-scaled models on each plane of each fold, 24 files in total.

Each of these models is around 512MB, the same size as the pre-trained FCN model.

* **Fold #0**: Coarse [[X]](https://drive.google.com/open?id=14FhLfolK8fHxe5Z8zPZeNUVZLBta5yaB)
  [[Y]](https://drive.google.com/open?id=1VHI3OhByrcYNrI7FnbBsoh86f_Jp0RLY)
  [[Z]](https://drive.google.com/open?id=17hNt545bRAlL6uHb-gyjx9MNgjfFrgs5)
  Fine [[X]](https://drive.google.com/open?id=1alVP2BiHtSsK8gO8WwlOPlZw1J9thymS)
  [[Y]](https://drive.google.com/open?id=1ad2DFtPK1cNZs784-csOx8l7i4M_mT3i)
  [[Z]](https://drive.google.com/open?id=1y0H7_0R5Dvnj90UoCgjKhoa5aOmDeZhW)
  (**Accuracy**: coarse 75.27%, oracle 84.97%, coarse-to-fine 83.65%)
* **Fold #1**: Coarse [[X]](https://drive.google.com/open?id=1n0-1QE4ZebXts8aVQmjM6hkNPpgRCR_c)
  [[Y]](https://drive.google.com/open?id=1L4agEV7f-AWHrGWbmF1PJOIW2-JLxQ7s)
  [[Z]](https://drive.google.com/open?id=1VvrGJ_DNnKfEhRcaGSnFvDre3v-BVAXA)
  Fine [[X]](https://drive.google.com/open?id=1AMEtHhuWSLR--hFrJpSyaJDvA80nzBv9)
  [[Y]](https://drive.google.com/open?id=1jDv4xtzjI1KBG1BnkMhoh0j-YKMTe-aM)
  [[Z]](https://drive.google.com/open?id=1zGKzQgbH-oeoTGQRwORe2mOCMoWSvDU6)
  (**Accuracy**: coarse 75.20%, oracle 82.85%, coarse-to-fine 80.93%)
* **Fold #2**: Coarse [[X]](https://drive.google.com/open?id=1qUo7zeHgU5fPFGFmHQfgQlhNaYqLfYpf)
  [[Y]](https://drive.google.com/open?id=13h7xXyJqgnm6Cd_LCXJmJEvEv2hwjPja)
  [[Z]](https://drive.google.com/open?id=1kxxFK1giZ2E7t4aZE_2sI1Eds3WaLWDj)
  Fine [[X]](https://drive.google.com/open?id=1iGrWxeq0m4Wj0Whq7zWVN6LpDLohCIhX)
  [[Y]](https://drive.google.com/open?id=1j5ohZKUXna2fSgp8yYCz4EVlhIMOsatT)
  [[Z]](https://drive.google.com/open?id=1Jro5uVjC0kYMyVapjtidIrQcPCRfwXil)
  (**Accuracy**: coarse 76.06%, oracle 84.00%, coarse-to-fine 82.20%)
* **Fold #3**: Coarse [[X]](https://drive.google.com/open?id=1V7DT_mGbYhIChwiodYrVupdfVnetjdCD)
  [[Y]](https://drive.google.com/open?id=1APCiNZrMgQQaMyihZMosJvg9OWR9pXnj)
  [[Z]](https://drive.google.com/open?id=1xU8qz0vVaLvdm77xaPuQgQEJw4n0Psgl)
  Fine [[X]](https://drive.google.com/open?id=1eQyLEy-mVzGka2gV0rnCHCOJibL45IoU)
  [[Y]](https://drive.google.com/open?id=1oDUhUxOOSNQoF87McYyyOY6J5EOGdBU_)
  [[Z]](https://drive.google.com/open?id=1-UM5GnsmZ8beFu_3V3vLpDyaGTwWz1XE)
  (**Accuracy**: coarse 75.72%, oracle 84.26%, coarse-to-fine 82.91%)

If you encounter any problems in downloading these files, please contact Lingxi Xie (198808xc@gmail.com).

## 6. Versions

The current version is v1.11.

To access old versions, please visit our project page: http://lingxixie.com/Projects/OrganSegC2F.html .

You can also view CHANGE_LOG.txt for the history of versions.

## 7. Contact Information

If you encounter any problems in using these codes, please open an issue in this repository.
You may also contact Yuyin Zhou (zhouyuyiner@gmail.com) or Lingxi Xie (198808xc@gmail.com).

Thanks for your interest! Have fun!