# 3D Prostate Segmentation from MR Images using FCNN

<img src="https://img.shields.io/github/stars/amanbasu/3d-prostate-segmentation?color=0088ff"/> <img src="https://img.shields.io/github/forks/amanbasu/3d-prostate-segmentation?color=ff8800"/> <img src="https://img.shields.io/badge/version-tensorflow==1.10.0-green?logo=tensorflow"/>

Collaborators: [Aman Agarwal](https://amanbasu.github.io), [Aditya Mishra](https://aditya985.github.io)

This repository contains files related to **Volumetric Segmentation of Prostate from MR Images Using FCNN with Increased Receptive Field**, presented at **Nvidia GTC 2019** ([link](https://github.com/amanbasu/3d-prostate-segmentation/blob/master/images/Deep%20Learning%20Research_20_P9190_Aman_Agarwal_1920x1607.png)). The dataset is provided by the [PROMISE12 challenge](https://promise12.grand-challenge.org).

DOI: [10.1134/S1054661821020024](https://doi.org/10.1134/S1054661821020024)

# About the files

- [resizing.py](https://github.com/amanbasu/3d-prostate-segmentation/blob/master/resizing.py): Resizes volumes of varying dimensions to a common size (128x128x64).
- [DataGenerator.py](https://github.com/amanbasu/3d-prostate-segmentation/blob/master/DataGenerator.py): Reads the images and performs various augmentations.
- [train.py](https://github.com/amanbasu/3d-prostate-segmentation/blob/master/train.py): Trains the model.
- [predict.py](https://github.com/amanbasu/3d-prostate-segmentation/blob/master/predict.py): Runs inference with the trained model.
- [metric_eval.py](https://github.com/amanbasu/3d-prostate-segmentation/blob/master/metric_eval.py): Evaluates metrics from the predictions and ground-truth labels, including Hausdorff distance, Dice coefficient, boundary distance, volume difference, precision, and recall.
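The resizing step can be illustrated with a minimal nearest-neighbour sketch. This is not the actual implementation in resizing.py (which may use interpolation); `resize_nearest` is a hypothetical helper showing the index-mapping idea:

```python
def resize_nearest(vol, new_shape):
    """Resize a 3D volume (nested lists, depth x height x width) to
    new_shape by nearest-neighbour index mapping."""
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    nd, nh, nw = new_shape
    # each output index maps back to the proportionally-placed input index
    return [[[vol[d * D // nd][h * H // nh][w * W // nw]
              for w in range(nw)]
             for h in range(nh)]
            for d in range(nd)]

# a 2x2x2 toy volume upsampled to 4x4x4
vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
out = resize_nearest(vol, (4, 4, 4))
print(len(out), len(out[0]), len(out[0][0]))  # 4 4 4
```

The same mapping works in both directions, so volumes larger or smaller than 128x128x64 are handled by the one function.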

# Introduction

- Prostate cancer is among the most commonly diagnosed cancers, and a leading cause of cancer-related death, in developed countries.
- Early detection of prostate cancer plays a significant role in the success of treatment and the eventual outcome.
- Radiologists first segment the prostate from the ultrasound image and then identify the hypoechoic regions, which are more likely to exhibit cancer and should be considered for biopsy.
- Manual segmentation consumes considerable time and effort, and is not only operator-dependent but also tedious and repetitive.
- In this work we present an approach to segment the prostate using Fully Convolutional Neural Networks, without requiring the involvement of an expert.
- We were able to obtain results comparable to the state of the art, with an average Dice score of 0.92 and 0.80 on the train and validation sets respectively.

# Data

- In MR images, the voxel intensities, and therefore the appearance characteristics of the prostate, can differ greatly between acquisition protocols, field strengths, and scanners.
- A segmentation algorithm designed for use in clinical practice therefore needs to deal with these variations.
- We decided to use the data from the PROMISE12 challenge, which includes scans from four different centers:
  - Haukeland University Hospital (HK), Norway
  - Beth Israel Deaconess Medical Center (BIDMC), United States
  - University College London (UCL), United Kingdom
  - Radboud University Nijmegen Medical Centre (RUNMC), Netherlands
- Each center provided <b>25 transverse T2-weighted MR images</b>, resulting in a total of <b>100</b> MR images.
- Details pertaining to the acquisition can be found in the table below.

![acquisition details](images/acquisition_details.png)

![prostate scan](images/prostate_scan.gif)

# Implementation

We used a modified V-Net architecture for segmentation, shown in the figure below.

![architecture](images/architecture.png)
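The "increased receptive field" in the paper's title refers to how far into the input volume each output voxel can see. Its growth through a stack of convolutions follows a standard recursion, sketched below; the layer configuration in the example is illustrative, not the exact architecture:

```python
def receptive_field(layers):
    """Receptive field (in voxels, per axis) of a stack of conv/pooling
    layers, each given as a (kernel_size, stride) pair."""
    rf, jump = 1, 1           # jump = cumulative stride so far
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s
    return rf

# e.g. two 5-voxel convs, a stride-2 downsampling, then two more 5-voxel convs
print(receptive_field([(5, 1), (5, 1), (2, 2), (5, 1), (5, 1)]))  # 26
```

Larger kernels and deeper stacks after each downsampling step grow this number quickly, which is what lets a voxel's prediction draw on context from a wide neighbourhood of the prostate.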

# Training

We trained our model on different GPUs and observed the following speedups.

| GPU configuration | Batch Size | Average Time per Epoch (s) |
| ----------------- | ------------- | -------------------------- |
| Single K80 | 2 | 147 |
| Dual K80 | 2 (1 per GPU) | 102 |
| Single P100 | 2 | 48 |
| Single P100 | 5 | 27 |
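The per-epoch times above translate into the following speedups relative to the single-K80 baseline, a quick back-of-the-envelope check:

```python
# average time per epoch (s), copied from the table above
times = {
    "Single K80, bs=2":  147,
    "Dual K80, bs=2":    102,
    "Single P100, bs=2":  48,
    "Single P100, bs=5":  27,
}
baseline = times["Single K80, bs=2"]
for config, t in times.items():
    print(f"{config}: {baseline / t:.2f}x")  # speedup over a single K80
```

This prints 1.00x, 1.44x, 3.06x, and 5.44x respectively: moving to a P100 and filling its memory with a larger batch buys far more than adding a second K80.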

## Evaluation Metrics

The metrics used in this study are widely used for the evaluation of segmentation algorithms:
1. <b>Dice coefficient</b>: measures the overlap between the predicted and ground-truth volumes.
2. <b>Absolute relative volume difference</b>: the percentage of the absolute difference between the volumes.
3. <b>Average boundary distance</b>: the average of the shortest distances between the boundary points of the volumes.
4. <b>95% Hausdorff distance</b>: the maximum of the shortest distances between the boundary points of the volumes; taking the 95th percentile makes the metric less sensitive to outliers.
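As a sketch of how two of these metrics are computed (metric_eval.py has its own implementation; here a volume is simply a set of foreground voxel coordinates):

```python
import math

def dice_coefficient(a, b):
    """Overlap between two voxel sets: 2*|A intersect B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff_95(a, b):
    """95th percentile of the shortest point-to-set distances, taken
    symmetrically over both boundary point sets."""
    def nearest(p, pts):
        return min(math.dist(p, q) for q in pts)
    dists = sorted([nearest(p, b) for p in a] + [nearest(q, a) for q in b])
    return dists[int(0.95 * (len(dists) - 1))]

pred  = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
truth = {(0, 0, 0), (0, 0, 1), (1, 1, 1)}
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```

The brute-force nearest-point search is quadratic in the number of boundary voxels; a practical implementation would use a distance transform or a k-d tree, but the definition is the same.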

# Results

After training for 5700 epochs we obtained Dice scores of 0.94 and 0.87 on the training and validation sets. The results were then submitted to the MICCAI PROMISE12 challenge, where we received a score of <b>84.61</b> on the test set.