# 3D Teeth Reconstruction from CT Scans

This is the code for our Computer Graphics course project (2018 Fall) on 3D teeth reconstruction from CT scans, maintained by Kaiwen Zha and Han Xue.

## Overview
  [3D Teeth Reconstruction from CT Scans](#3d-teeth-reconstruction-from-ct-scans)
  - [Overview](#overview)
  - [Dataset Annotation](#dataset-annotation)
  - [SegNet](#segnet)
    - [Dependencies](#dependencies)
    - [Preparation](#preparation)
    - [Training Phase](#training-phase)
    - [Evaluating Phase](#evaluating-phase)
    - [Qualitative Results](#qualitative-results)
  - [RefineNet](#refinenet)
    - [Dependencies](#dependencies-1)
    - [Preparation](#preparation-1)
    - [Training Phase](#training-phase-1)
    - [Evaluating Phase](#evaluating-phase-1)
    - [Qualitative Results](#qualitative-results-1)
  - [SESNet](#sesnet)
    - [Dependencies](#dependencies-2)
    - [Preparation](#preparation-2)
    - [Training Phase](#training-phase-2)
    - [Evaluating Phase](#evaluating-phase-2)
    - [Quantitative Results](#quantitative-results)
    - [Qualitative Results](#qualitative-results-2)
  - [3D Reconstruction](#3d-reconstruction)
    - [Image Denoising](#image-denoising)
    - [Image Interpolation](#image-interpolation)
    - [3D Visualization](#3d-visualization)
    - [Demonstration](#demonstration)
  - [Contributors](#contributors)
  - [Acknowledgement](#acknowledgement)

## Dataset Annotation

Since our given dataset only contains raw CT scan images, we manually annotated the segmentations of 500 images using [js-segment-annotator](https://github.com/kyamagu/js-segment-annotator). You can download our dataset from [here](https://jbox.sjtu.edu.cn/l/R092lj).

## SegNet

We use [SegNet](http://mi.eng.cam.ac.uk/projects/segnet/) as one of our base networks for segmentation. We train the network end-to-end on our annotated training data and evaluate it on our annotated testing data.

![segnet](./doc/segnet.jpg)

### Dependencies

- Python 3.6
- TensorFlow 1.5.0
- Numpy
- Scipy

### Preparation

- Download our pretrained models from [here](https://jbox.sjtu.edu.cn/l/AJTFJt).
- Put the downloaded dataset folders /Images and /Labels into /Data/Training and /Data/Test.
- Put our pretrained models /Run_new into the folder /Output.

### Training Phase

- Run the training phase

```bash
python train.py
```

- Start TensorBoard in another terminal

```bash
tensorboard --logdir ./Output --port [port_id]
```

### Evaluating Phase

- Run the evaluating phase

```bash
python test.py
```

Note that you should have the pretrained models /Run_new in the folder /Output; the predicted segmentations will be written to /Output/Run_new/Image_Output.

### Qualitative Results

<div align="center">
    <img src="https://github.com/kaiwenzha/3D-Teeth-Reconstruction-from-CT-Scans/blob/master/doc/raw.png?raw=true" width="300">
    <img src="https://github.com/kaiwenzha/3D-Teeth-Reconstruction-from-CT-Scans/blob/master/doc/segnet_res.png?raw=true" width="300">
</div>

## RefineNet

We also adopt [RefineNet](https://github.com/guosheng/refinenet) as another base network for segmentation. Here we again use our annotated data for training and evaluation; the code is written by us from scratch and provides **two interfaces, for RefineNet and SESNet respectively**.

![refinenet](https://github.com/kaiwenzha/3D-Teeth-Reconstruction-from-CT-Scans/blob/master/doc/refinenet.png?raw=true)

### Dependencies

- Python 2.7
- TensorFlow 1.8.0
- Numpy
- OpenCV
- Pillow
- Matplotlib

### Preparation

- Download the checkpoint for the ResNet backbone, the TFRecords for the training and testing data, the generated color map, and our pretrained models from [here](https://jbox.sjtu.edu.cn/l/yo7MaQ).
- Put the downloaded dataset folders /images and /labels into the folder /data.
- Put the ResNet checkpoint `resnet_v1_101.ckpt`, the color map `color_map`, and the training and testing TFRecords `train.tfrecords` and `test.tfrecords` into the folder /data.
- Put our pretrained models into the folder /checkpoints.

### Training Phase

- If you have not downloaded the TFRecords, convert the training and testing data into TFRecords by running the command below (a rough sketch of what this conversion does is given at the end of this section)

```bash
python convert_teeth_to_tfrecords.py
```

- If you have not downloaded the color map, produce the color map by running

```bash
python build_color_map.py
```

- Run the training phase **(multi-GPU training via tower loss is supported)**

```bash
python SESNet/multi_gpu_train.py --model_type refinenet
```

Note that you can pass other command-line parameters for training, such as `batch_size`, `learning_rate`, `gpu_list`, and so on.
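
For reference, the snippet below is a minimal sketch of what a conversion like `convert_teeth_to_tfrecords.py` typically does with the TensorFlow 1.x API: pair each CT image with its label mask and serialize both into a TFRecord file. The feature names and directory layout here are illustrative assumptions, not necessarily the exact ones used by our script.

```python
# Sketch only: serialize (image, label) PNG pairs into a TFRecord file.
# Feature names and folder layout are assumptions.
import os
import numpy as np
import tensorflow as tf
from PIL import Image

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def write_tfrecords(image_dir, label_dir, out_path):
    writer = tf.python_io.TFRecordWriter(out_path)
    for name in sorted(os.listdir(image_dir)):
        image = np.array(Image.open(os.path.join(image_dir, name)))
        label = np.array(Image.open(os.path.join(label_dir, name)))
        example = tf.train.Example(features=tf.train.Features(feature={
            'height': _int64_feature(image.shape[0]),
            'width': _int64_feature(image.shape[1]),
            'image_raw': _bytes_feature(image.tobytes()),
            'label_raw': _bytes_feature(label.tobytes()),
        }))
        writer.write(example.SerializeToString())
    writer.close()

# write_tfrecords('data/images/train', 'data/labels/train', 'data/train.tfrecords')
```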

### Evaluating Phase

- Run the evaluating phase

```bash
python SESNet/test.py --model_type refinenet --result_path ref_result
```

Note that you should have trained models in the folder /checkpoints; the predicted segmentations will be written to the folder /ref_result.

### Qualitative Results

<div align="center">
    <img src="https://github.com/kaiwenzha/3D-Teeth-Reconstruction-from-CT-Scans/blob/master/doc/raw.png?raw=true" width="300">
    <img src="https://github.com/kaiwenzha/3D-Teeth-Reconstruction-from-CT-Scans/blob/master/doc/refinenet_res.png?raw=true" width="300">
</div>

## SESNet

This architecture is proposed by us. First, we use a base network (SegNet or RefineNet) to predict rough segmentations. Then a **Shape Coherence Module (SCM)**, composed of a 2-layer ConvLSTM, learns the fluctuations of the segmented shapes to improve accuracy.

![sesnet](./doc/sesnet.jpg)
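
To illustrate the idea behind the SCM (this is not the code in this repo), a 2-layer ConvLSTM that refines a sequence of rough segmentation maps could be sketched with `tf.keras` layers as below; the number of filters, kernel size, and the final 1x1 convolution are assumptions.

```python
# Illustrative sketch of a Shape Coherence Module: a 2-layer ConvLSTM over a
# sequence of rough segmentation maps (e.g. neighboring CT slices), followed
# by a per-slice 1x1 convolution producing refined class scores.
import tensorflow as tf

def shape_coherence_module(num_classes):
    return tf.keras.models.Sequential([
        # Input: (batch, time, height, width, channels) rough segmentations.
        tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding='same',
                                   return_sequences=True),
        tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding='same',
                                   return_sequences=True),
        # Map the recurrent features back to per-pixel class probabilities.
        tf.keras.layers.TimeDistributed(
            tf.keras.layers.Conv2D(num_classes, kernel_size=1,
                                   activation='softmax')),
    ])

# scm = shape_coherence_module(num_classes=2)
# refined = scm(rough_segmentations)  # rough_segmentations: (B, T, H, W, C)
```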

### Dependencies

The same as the dependencies of RefineNet.

### Preparation

The same as in RefineNet.

### Training Phase

- Generate the TFRecords for training and testing, and the color map, as in RefineNet.
- Run the training phase **(multi-GPU training via tower loss is supported)**

```bash
python SESNet/multi_gpu_train.py --model_type sesnet
```

Note that you can pass other command-line parameters for training as well, such as `batch_size`, `learning_rate`, `gpu_list`, and so on.

### Evaluating Phase

- Run the evaluating phase

```bash
python SESNet/test.py --model_type sesnet --result_path ses_result
```

Note that you should have trained models in the folder /checkpoints; the predicted segmentations will be written to the folder /ses_result.

### Quantitative Results

|           | Pixel Accuracy (%) | IoU (%)  |
| :-------: | :----------------: | :------: |
|  SegNet   |        92.6        |   71.2   |
| RefineNet |        99.6        |   78.3   |
|  SESNet   |      **99.7**      | **82.6** |
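
Pixel accuracy and IoU here are the standard segmentation metrics; a minimal NumPy sketch of how they can be computed from a predicted binary mask and a ground-truth mask is shown below (how the scores are averaged over the test set is not shown).

```python
# Minimal sketch of the two reported metrics for binary masks (values 0 or 1).
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return np.mean(pred == gt)

def iou(pred, gt):
    """Intersection over union of the foreground (tooth) class."""
    intersection = np.logical_and(pred == 1, gt == 1).sum()
    union = np.logical_or(pred == 1, gt == 1).sum()
    return intersection / union if union > 0 else 1.0
```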

### Qualitative Results

![compare](./doc/compare.jpg)

## 3D Reconstruction

### Image Denoising

We use **Morphology-based Smoothing**, combining *erosion* and *dilation* operations.

![denoising](https://github.com/kaiwenzha/3D-Teeth-Reconstruction-from-CT-Scans/blob/master/doc/denoising.png?raw=true)

- We implemented it in MATLAB. Begin denoising by running

```matlab
denoising(inputDir, outputDir, thresh);
```

Note that `inputDir` is the folder of input PNG images, `outputDir` is the folder of output PNG images, and `thresh` is the binarization threshold (0-255).
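
The same operation can also be sketched in Python; the snippet below is a rough equivalent of the MATLAB function above, assuming binary masks and using SciPy's morphology operators. The structuring-element size is an assumption.

```python
# Rough Python equivalent of the morphology-based smoothing: binarize, then
# combine erosion and dilation (opening followed by closing) to remove
# small speckles and fill small holes.
import numpy as np
from PIL import Image
from scipy import ndimage

def denoise(in_path, out_path, thresh=128):
    mask = np.array(Image.open(in_path).convert('L')) > thresh   # binarize
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    closed = ndimage.binary_closing(opened, structure=np.ones((3, 3)))
    Image.fromarray((closed * 255).astype(np.uint8)).save(out_path)
```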

### Image Interpolation

To reduce the gaps between adjacent layers, we interpolate between 2D slices to increase the depth of the 3D volumetric intensity image.

- We implemented it in MATLAB. Begin interpolating by running

```matlab
interpolate(inputDir, outputDir, new_depth, method)
```

Note that `inputDir` is the folder of input PNG images, `outputDir` is the folder of output PNG images, `new_depth` is the new depth of the 3D volumetric intensity image, and `method` is the interpolation method, which can be `'linear'`, `'cubic'`, `'box'`, `'lanczos2'`, or `'lanczos3'`.
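
A rough Python counterpart of this step (not the MATLAB function above) is to stack the slices into a volume and resample it along the depth axis, e.g. with `scipy.ndimage.zoom`; the cubic interpolation order below is an assumption.

```python
# Rough Python counterpart: stack PNG slices into a volume and resample the
# depth axis to new_depth, leaving the in-plane resolution unchanged.
import os
import numpy as np
from PIL import Image
from scipy import ndimage

def interpolate_depth(input_dir, new_depth):
    files = sorted(f for f in os.listdir(input_dir) if f.endswith('.png'))
    volume = np.stack([np.array(Image.open(os.path.join(input_dir, f)).convert('L'))
                       for f in files])                  # shape: (depth, H, W)
    zoom = (new_depth / volume.shape[0], 1.0, 1.0)       # only rescale depth
    return ndimage.zoom(volume.astype(np.float32), zoom, order=3)
```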

- Convert the sequence of PNG slices to RAW format (volume data) by running

```bash
python png2raw.py -in input_dir -out output_dir
```
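
The conversion to RAW amounts to writing the stacked slices as flat binary voxel values, with the dimensions recorded separately; the sketch below shows that idea only and is not the `png2raw.py` implementation.

```python
# Minimal sketch of a PNG-stack-to-RAW conversion: the RAW file is just the
# uint8 voxel values written in order; dimensions must be kept separately.
import os
import numpy as np
from PIL import Image

def png_to_raw(input_dir, out_path):
    files = sorted(f for f in os.listdir(input_dir) if f.endswith('.png'))
    slices = [np.array(Image.open(os.path.join(input_dir, f)).convert('L'))
              for f in files]
    volume = np.stack(slices).astype(np.uint8)   # (depth, height, width)
    volume.tofile(out_path)                      # flat binary dump
    return volume.shape                          # keep the dimensions for later

# shape = png_to_raw('interp_out', 'teeth.raw')
```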

### 3D Visualization

We use **surface rendering** for 3D visualization. We adopt the [Dual Marching Cubes](https://dl.acm.org/citation.cfm?id=1034484) algorithm to convert the 3D volumetric intensity image into a surface representation. This implementation requires C++11 and needs no other dependencies.

- To build the application and see the available options in a Linux environment, type

```bash
$ make
$ ./dmc -help
```

A basic CMake file is provided as well.

- Extract a surface (OBJ format) from the volume data (RAW format) by running

```bash
$ ./dmc -raw FILE X Y Z
```

Note that `X Y Z` specify the dimensions of the RAW file.
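
As a quick sanity check in Python (separate from the dmc tool), a RAW volume with known dimensions can be loaded with NumPy and turned into a mesh with scikit-image's standard marching cubes; note this is plain marching cubes, not the dual marching cubes algorithm used by dmc, and the reshape order and iso-level are assumptions.

```python
# Sanity-check sketch: load the RAW volume and extract a mesh with standard
# marching cubes from scikit-image (not the dual marching cubes used by dmc).
import numpy as np
from skimage import measure

def raw_to_mesh(raw_path, dims, level=127):
    # dims is the (depth, height, width) tuple matching the X Y Z passed to dmc.
    volume = np.fromfile(raw_path, dtype=np.uint8).reshape(dims)
    verts, faces, normals, values = measure.marching_cubes(volume, level=level)
    return verts, faces

# verts, faces = raw_to_mesh('teeth.raw', dims=(depth, height, width))
```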

- Display the OBJ file with software such as 3D Model Viewer, or with our provided MATLAB function

```matlab
obj_display(input_file_name);
```

### Demonstration

![rec](./doc/rec.jpg)

## Contributors

This repo is maintained by [Kaiwen Zha](https://github.com/KaiwenZha) and [Han Xue](https://github.com/xiaoxiaoxh).

## Acknowledgement

Special thanks to Prof. Bin Sheng and TA Xiaoshuang Li for their guidance.