# DR-UNet: A robust deep learning segmentation method for hematoma volumetric detection in intracerebral hemorrhage

------

## Structure of DR-UNet

This is an implementation of DR-UNet in Python 3.6, Keras, and TensorFlow 2.0. DR-UNet consists of an encoding (down-sampling) path and a decoding (up-sampling) path. The entire workflow of our computational analysis is shown in the figure below.

<img src="figures/Fig6.png" alt="model structure" style="zoom:150%;" />

The DR-UNet model structure is shown in the figure below.

![model structure](figures/Fig0.jpg)
To improve the segmentation performance of the model, three reduced dimensional residual convolution units (RDRCUs) were developed to replace the traditional convolution layer. The three convolution blocks are illustrated in the figure below. Each RDRCU has two branches (a main branch and a side branch) that jointly process the input features.

![model structure](figures/Fig7.jpg)
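
As a rough illustration only (the repository's actual RDRCU variants live in [dr_unet.py](drunet/model/dr_unet.py)), a two-branch residual unit of this kind can be sketched in Keras as follows; the filter counts and exact branch layout here are assumptions:

```python
from tensorflow.keras import layers

def rdrcu_sketch(x, filters):
    """Hypothetical two-branch residual unit: the main branch reduces the
    channel dimension with a 1x1 convolution before a 3x3 convolution,
    while the side branch projects the input so both branches can be summed.
    See dr_unet.py for the three RDRCU variants actually used."""
    # main branch: channel reduction followed by spatial convolution
    main = layers.Conv2D(filters // 2, 1, padding='same', activation='relu')(x)
    main = layers.Conv2D(filters, 3, padding='same')(main)
    main = layers.BatchNormalization()(main)
    # side branch: 1x1 projection so the channel counts match
    side = layers.Conv2D(filters, 1, padding='same')(x)
    # residual fusion of the two branches
    out = layers.Add()([main, side])
    return layers.Activation('relu')(out)
```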
## Experimental results of hematoma segmentation
We first trained DR-UNet to recognize the hematoma region in patients. Performance was evaluated on two testing datasets (internal and external) using the following criteria: i) sensitivity, ii) specificity, iii) precision, iv) Dice, v) Jaccard and vi) VOE (details in the Methods section). Moreover, we compared DR-UNet with UNet, FCM and active contours. For all four methods, the segmentation labeling was taken as the ground-truth standard (details in the Methods section). The main results are shown in the figure and table below.

The table below reports the sensitivity, specificity, precision, Dice, Jaccard and VOE of the four methods on the internal and external testing datasets.

![results](figures/Fig8.jpg)
Figure A shows boxplots of the performance of DR-UNet and the other three methods for the segmentation and detection of ICH on the two testing datasets. The internal testing dataset, drawn from the retrospective dataset, was enriched to include all ICH subtypes. Figure B covers four different types of hematomas and visually compares the DR-UNet, UNet, FCM and active contour methods.

<img src="figures/Fig1.jpg" alt="segment result" style="zoom:90%;" />
## Irregularly shaped and epidural hematoma
Results of the hematoma volumetric analysis in (A) the irregularly shaped hematoma group and (B) the subdural and epidural hematoma group. Manual segmentations of the input ICH images are denoted by red lines, the segmented outputs of the DR-UNet model by blue lines, and the segmented outputs of UNet by green lines.

![segmentation](figures/Fig3.jpg)

Four examples of ICH segmentation for subdural and epidural hematomas in the prospective dataset, shown with the original images and partially enlarged views.

![segmentation](figures/Fig4.jpg)
Four examples of ICH segmentation for irregularly shaped hematomas in the prospective dataset, shown with the original images and partially enlarged views.

![](figures/Fig5.jpg)
## Hematoma volume calculation experiment
Hematoma volumetric analysis by DR-UNet, UNet and the Coniglobus formula. A. Hematoma volumes (HVs) measured by the ground truth and the three methods. B. Correlation plots between the ground truth and each of the three methods. C. Error curves of the three methods, i.e., {ground truth − DR-UNet measurement}, {ground truth − UNet measurement} and {ground truth − Coniglobus formula measurement}. D. RMSE, SD, MAE and average time (seconds/scan).

<img src="figures/Fig2.jpg" alt="model structure" style="zoom:70%;" />
## Getting Started
- [data.py](drunet/data.py)  Used to build your own dataset. You need the original images and the corresponding ground-truth images; the completed dataset is stored in TensorFlow's TFRecord format.

  ```python
  from data import make_data, get_tfrecord_data
  
  # build a TFRecord dataset from the original images and ground-truth masks
  make_data(image_shape, image_dir, mask_dir, out_name, out_dir)
  
  # load the TFRecord dataset for training
  dataset = get_tfrecord_data(
      tf_record_path, tf_record_name, data_shape, batch_size=32, repeat=1, shuffle=True)
  ```
- [loss.py](drunet/loss.py)  To obtain higher segmentation accuracy on the cerebral hematoma dataset, we use binary cross-entropy combined with the Dice loss as the loss function of DR-UNet.
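
  A minimal sketch of such a combined loss (assuming standard TensorFlow ops; the repository's own version is in [loss.py](drunet/loss.py)):

  ```python
  import tensorflow as tf

  def bce_dice_loss(y_true, y_pred, smooth=1.0):
      """Hypothetical combined loss: binary cross entropy plus (1 - Dice)."""
      y_true = tf.cast(y_true, tf.float32)
      bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
      intersection = tf.reduce_sum(y_true * y_pred)
      dice = (2.0 * intersection + smooth) / (
          tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
      return bce + (1.0 - dice)
  ```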
- [module.py](drunet/module.py) This file contains several auxiliary functions for image processing.
- [utils.py](drunet/utils.py) This Python file contains several auxiliary functions for file operations.
- [performance.py](drunet/performance.py) To evaluate the segmentation performance of the model, this file contains auxiliary functions for computing several common segmentation indicators.
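
  A rough illustration of the kind of indicators computed here, Dice, Jaccard and VOE for a pair of binary masks (the actual functions live in [performance.py](drunet/performance.py)):

  ```python
  import numpy as np

  def dice_jaccard_voe(pred, gt, eps=1e-7):
      """Hypothetical example: overlap metrics between two binary masks."""
      pred, gt = pred.astype(bool), gt.astype(bool)
      intersection = np.logical_and(pred, gt).sum()
      union = np.logical_or(pred, gt).sum()
      dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
      jaccard = intersection / (union + eps)
      voe = 1.0 - jaccard  # volumetric overlap error
      return dice, jaccard, voe
  ```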
- [drunet.py](drunet/model/dr_unet.py) This file contains the specific implementation of DR-UNet and three reduced dimensional residual convolution units (RDRCUs).
  ```python
  from model import dr_unet
  
  # build DR-UNet for single-channel 256x256 inputs and print a layer summary
  model = dr_unet.dr_unet(input_shape=(256, 256, 1))
  model.summary()
  ```
- [segment.py](drunet/segment.py) This file shows how to train, test and validate DR-UNet on your own dataset, including hematoma segmentation and hematoma volume estimation.

  ```python
  import argparse
  import pathlib
  
  # Parameter configuration
  parser = argparse.ArgumentParser(description="Segment Use Args")
  parser.add_argument('--model-name', default='DR_UNet', type=str)
  parser.add_argument('--dims', default=32, type=int)
  parser.add_argument('--epochs', default=50, type=int)
  parser.add_argument('--batch-size', default=16, type=int)
  parser.add_argument('--lr', default=2e-4, type=float)
  
  # Training, testing and validation parameter settings
  parser.add_argument('--height', default=256, type=int)
  parser.add_argument('--width', default=256, type=int)
  parser.add_argument('--channel', default=1, type=int)
  parser.add_argument('--pred-height', default=4 * 256, type=int)
  parser.add_argument('--pred-width', default=4 * 256, type=int)
  parser.add_argument('--total-samples', default=5000, type=int)
  parser.add_argument('--invalid-samples', default=1000, type=int)
  parser.add_argument('--regularize', default=False, type=bool)
  parser.add_argument('--record-dir', default=r'', type=str, help='the save dir of tfrecord')
  parser.add_argument('--train-record-name', type=str, default=r'train_data',
                      help='the train record save name')
  parser.add_argument('--test-image-dir', default=r'', type=str,
                      help='the path of test images dir')
  parser.add_argument('--invalid-record-name', type=str, default=r'test_data',
                      help='the invalid record save name')
  parser.add_argument('--gt-mask-dir', default=r'', type=str,
                      help='the ground truth dir of validation set')
  parser.add_argument('--invalid-volume-dir', default=r'', type=str,
                      help='estimation bleeding volume')
  args = parser.parse_args()
  
  segment = Segmentation(args)
  # start training
  segment.train()
  # predict hematoma volume
  input_dir = r''  # Fill in the image path
  save_dir = r''  # Fill in the save path
  segment.predict_blood_volume(input_dir, save_dir, calc_nums=-1, dpi=96, thickness=0.45)
  ```
  [train_segment.py](drunet/train_segment.py) If you want to train the segmentation model, you can run this file directly after filling in the data path.
  ```python
  import segment
  
  if __name__ == '__main__':
      Seg = segment.Segmentation()
      # start training
      Seg.train()
  ```
  [predict_segment.py](drunet/predict_segment.py) If you want to predict the segmentation result, you can run this file directly after filling in the CT images path.
  ```python
  import segment
  
  if __name__ == '__main__':
      Seg = segment.Segmentation()
      # start prediction
      input_dir = r''  # Fill in the image path
      save_dir = r''  # Fill in the save path
      Seg.predict_and_save(input_dir, save_dir)
  ```
  [predict_volume.py](drunet/predict_volume.py) If you want to predict the complete hematoma volume of a patient, fill in the path and then run this file.
  ```python
  import segment
  
  if __name__ == '__main__':
      Seg = segment.Segmentation()
      # start prediction
      input_dir = r''  # Fill in the image path
      save_dir = r''  # Fill in the save path
      Seg.predict_blood_volume(input_dir, save_dir, thickness=0.45)
  ```
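
  Conceptually, the per-patient volume comes from the per-slice segmentations. A simplified sketch of this idea, assuming a pixel spacing in cm and that the `thickness=0.45` argument above is a slice thickness in cm (this is not the exact logic of `predict_blood_volume`):

  ```python
  import numpy as np

  def estimate_volume_ml(masks, pixel_spacing_cm, slice_thickness_cm=0.45):
      """Hypothetical helper: sum the segmented area over all CT slices and
      multiply by the slice thickness to obtain a volume in millilitres."""
      area_per_pixel_cm2 = pixel_spacing_cm ** 2
      # total segmented area across slices, in cm^2
      total_area_cm2 = sum(np.count_nonzero(m) * area_per_pixel_cm2 for m in masks)
      return total_area_cm2 * slice_thickness_cm  # 1 cm^3 == 1 ml
  ```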
  [test_performance.py](drunet/test_performance.py) If you want to evaluate the segmentation performance of the model, fill in the relevant paths first and then run this file.
  ```python
  from performance import calc_performance
  
  if __name__ == '__main__':
      # test model segmentation performance
      pred_path = r''  # prediction result path
      gt_path = r''  # ground truth path
      calc_performance(pred_path, gt_path, img_resize=(1400, 1400))
  ```
## Requirements
Python 3.6, TensorFlow 2.1 and other common packages listed in `requirements.txt`.