This is the code for the Computer Graphics course project (2018 Fall) on 3D teeth reconstruction from CT scans, maintained by Kaiwen Zha and Han Xue.
3D Teeth Reconstruction from CT Scans
- Overview
- Dataset Annotation
- SegNet
  - Dependencies
  - Preparation
  - Training Phase
  - Evaluating Phase
  - Qualitative Results
- RefineNet
  - Dependencies
  - Preparation
  - Training Phase
  - Evaluating Phase
  - Qualitative Results
- SESNet
  - Dependencies
  - Preparation
  - Training Phase
  - Evaluating Phase
  - Quantitative Results
  - Qualitative Results
- 3D Reconstruction
  - Image Denoising
  - Image Interpolation
  - 3D Visualization
- Demonstration
- Contributors
- Acknowledgement
Since the given dataset contains only raw CT scan images, we manually annotated segmentations for 500 images using js-segment-annotator. You can download our dataset from here.
We use SegNet as one of our base networks for segmentation. We train the network end-to-end on our annotated training data and evaluate it on our annotated testing data.
python train.py
tensorboard --logdir ./Output --port [port_id]
python test.py
Note that you should have the pretrained model /Run_new in folder /Output; the predicted segmentations will be written to folder /Output/Run_new/Image_Output.
We also adopt RefineNet as another base network for segmentation. Again, we train and evaluate on our annotated data. This code is written by us from scratch and provides two interfaces, one for RefineNet and one for SESNet.
Python 2.7
TensorFlow 1.8.0
Numpy
OpenCV
Pillow
Matplotlib
Place the pretrained model resnet_v1_101.ckpt, the color map color_map, and the training and testing TFRecords train.tfrecords and test.tfrecords into folder /data.
python convert_teeth_to_tfrecords.py
python build_color_map.py
python SESNet/multi_gpu_train.py --model_type refinenet
Note that you can assign other command line parameters for training, such as batch_size, learning_rate, gpu_list and so on.
python SESNet/test.py --model_type refinenet --result_path ref_result
Note that you should have trained models in folder /checkpoints; the predicted segmentations will be written to folder /ref_result.
This architecture is proposed by us. First, a base network (SegNet or RefineNet) predicts rough segmentations; then a Shape Coherence Module (SCM), composed of a 2-layer ConvLSTM, learns the fluctuations of shapes across consecutive slices to improve accuracy.
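To illustrate the idea behind the SCM, here is a minimal single-channel ConvLSTM cell run over a stack of consecutive slices. This is only a sketch: the kernel size, channel count, bias-free gates, and initialization are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation for a single channel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i+kh, j:j+kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Toy single-channel ConvLSTM cell: LSTM gates computed by convolutions."""
    def __init__(self, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # one input kernel and one hidden-state kernel per gate (i, f, o, g)
        self.Wx = {g: rng.normal(scale=0.1, size=(ksize, ksize)) for g in "ifog"}
        self.Wh = {g: rng.normal(scale=0.1, size=(ksize, ksize)) for g in "ifog"}

    def step(self, x, h, c):
        i = sigmoid(conv2d_same(x, self.Wx["i"]) + conv2d_same(h, self.Wh["i"]))
        f = sigmoid(conv2d_same(x, self.Wx["f"]) + conv2d_same(h, self.Wh["f"]))
        o = sigmoid(conv2d_same(x, self.Wx["o"]) + conv2d_same(h, self.Wh["o"]))
        g = np.tanh(conv2d_same(x, self.Wx["g"]) + conv2d_same(h, self.Wh["g"]))
        c = f * c + i * g          # update cell state
        h = o * np.tanh(c)         # emit hidden state (refined feature map)
        return h, c

# Run the cell over rough per-slice segmentations (here: random stand-ins).
slices = [np.random.default_rng(s).random((8, 8)) for s in range(4)]
cell = ConvLSTMCell()
h = c = np.zeros((8, 8))
for x in slices:
    h, c = cell.step(x, h, c)
print(h.shape)  # (8, 8)
```

Stacking two such cells (feeding the first cell's hidden state into a second cell) gives the 2-layer structure described above.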
The same as the dependencies of RefineNet.
The same as in RefineNet.
python SESNet/multi_gpu_train.py --model_type sesnet
Note that you can assign other command line parameters for training as well, such as batch_size, learning_rate, gpu_list and so on.
python SESNet/test.py --model_type sesnet --result_path ses_result
Note that you should have trained models in folder /checkpoints; the predicted segmentations will be written to folder /ses_result.
| | Pixel Accuracy | IoU |
|---|---|---|
| SegNet | 92.6 | 71.2 |
| RefineNet | 99.6 | 78.3 |
| SESNet | 99.7 | 82.6 |
We use morphology-based smoothing, combining erosion and dilation operations.
denoising(inputDir, outputDir, thresh);
Note that inputDir is the folder for input PNG images, outputDir is the folder for output PNG images, and thresh is the threshold for binarization (0~255).
To reduce the gaps between adjacent layers, we interpolate between 2D images to increase the depth of the 3D volumetric intensity image.
interpolate(inputDir, outputDir, new_depth, method)
Note that inputDir is the folder for input PNG images, outputDir is the folder for output PNG images, new_depth is the new depth of the 3D volumetric intensity image, and method is the interpolation method, which can be 'linear', 'cubic', 'box', 'lanczos2' or 'lanczos3'.
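For the 'linear' case, the depth-resampling idea can be sketched in numpy. The function name and API below are illustrative only; the project's actual routine is the MATLAB interpolate above.

```python
import numpy as np

def interpolate_depth(vol, new_depth):
    """Linearly resample a (D, H, W) volume to (new_depth, H, W)
    along the slice (depth) axis."""
    D = vol.shape[0]
    # positions of the new slices in the old slice coordinate system
    zs = np.linspace(0, D - 1, new_depth)
    lo = np.floor(zs).astype(int)
    hi = np.minimum(lo + 1, D - 1)
    w = (zs - lo)[:, None, None]   # fractional distance to the next slice
    return (1 - w) * vol[lo] + w * vol[hi]

# Double the depth of a tiny 2-slice volume: intermediate slices blend
# linearly between the all-zeros and all-ones slices.
vol = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
out = interpolate_depth(vol, 4)
print(out[:, 0, 0])  # [0.  0.3333...  0.6666...  1.]
```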
python png2raw.py -in input_dir -out output_dir
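The PNG-to-RAW conversion amounts to stacking per-slice images into one contiguous byte volume. The sketch below is an illustrative stand-in for png2raw.py (its actual I/O may differ), with PNG decoding omitted: the slices are assumed to be already loaded as 2D uint8 arrays.

```python
import numpy as np
from pathlib import Path

def png_stack_to_raw(arrays, out_path):
    """Stack per-slice 2D uint8 arrays (already decoded from PNG)
    into one contiguous RAW file: Z slices of H x W bytes each."""
    vol = np.stack([a.astype(np.uint8) for a in arrays])  # (Z, H, W)
    vol.tofile(out_path)
    return vol.shape  # dimensions to pass to the RAW consumer

# Three 4x5 slices -> a 60-byte RAW volume.
slices = [np.full((4, 5), i, dtype=np.uint8) for i in range(3)]
shape = png_stack_to_raw(slices, "volume.raw")
print(shape, Path("volume.raw").stat().st_size)  # (3, 4, 5) 60
```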
We use surface rendering for 3D visualization. We adopt the Dual Marching Cubes algorithm to convert the 3D volumetric intensity image into a surface representation. The implementation requires C++11 and has no other dependencies.
$ make
$ ./dmc -help
A basic CMAKE file is provided as well.
$ ./dmc -raw FILE X Y Z
Note that X, Y, Z specify the dimensions of the RAW file.
obj_display(input_file_name);
This repo is maintained by Kaiwen Zha and Han Xue.
Special thanks to Prof. Bin Sheng and TA Xiaoshuang Li for their guidance.