# Automated Tissue Segmentation from High-Resolution 3D Steady-State MRI with Deep Learning

Albert Ugwudike, Joe Arrowsmith, Joonsu Gha, Kamal Shah, Lapo Rastrelli, Olivia Gallupova, Pietro Vitiello

---

### 2D Models Implemented

- [x] SegNet
- [x] Vanilla UNet
- [x] Attention UNet
- [x] <del> Multi-res UNet </del>
- [x] R2_UNet
- [x] R2_Attention UNet
- [x] UNet++
- [x] 100-layer Tiramisu
- [x] DeepLabv3+

### 3D Models Implemented

- [x] 3D UNet
- [x] Relative 3D UNet
- [x] Slice 3D UNet
- [x] VNet
- [x] Relative VNet
- [x] Slice VNet

---

## Results

### Baseline Comparison of 3D Methods

| Model                     | Input Shape       | Loss  | Val Loss | Duration / Min |
|---------------------------|-------------------|-------|----------|----------------|
| Small Highwayless 3D UNet | (160,160,160)     | 0.777 | 0.847    | 86.6           |
| Small 3D UNet             | (160,160,160)     | 0.728 | 0.416    | 89.1           |
| Small Relative 3D UNet    | (160,160,160),(3) | 0.828 | 0.889    | 90.1           |
| Small VNet                | (160,160,160)     | 0.371 | 0.342    | 89.5           |

#### Small 3D UNet Highwayless (160,160,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

<br />

#### Small 3D UNet (160,160,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

<br />

#### Small Relative 3D UNet (160,160,160),(3)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

<br />

#### Small VNet (160,160,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

---

### Comparison of VNet Methods

| Model          | Input Shape       | Loss          | Val Loss      | Roll Loss     | Roll Val Loss | Duration / Min |
|:--------------:|:-----------------:|:-------------:|:-------------:|:-------------:|:-------------:|:--------------:|
| Tiny           | (64,64,64)        | 0.627 ± 0.066 | 0.684 ± 0.078 | 0.652 ± 0.071 | 0.686 ± 0.077 | 61.5 ± 5.32    |
| Tiny           | (160,160,160)     | 0.773 ± 0.01  | 0.779 ± 0.019 | 0.778 ± 0.007 | 0.787 ± 0.016 | 101.8 ± 2.52   |
| Small          | (160,160,160)     | 0.648 ± 0.156 | 0.676 ± 0.106 | 0.656 ± 0.152 | 0.698 ± 0.076 | 110.1 ± 4.64   |
| Small Relative | (160,160,160),(3) | 0.653 ± 0.168 | 0.639 ± 0.176 | 0.659 ± 0.167 | 0.644 ± 0.172 | 104.6 ± 9.43   |
| Slice          | (160,160,5)       | 0.546 ± 0.019 | 0.845 ± 0.054 | 0.559 ± 0.020 | 0.860 ± 0.072 | 68.6 ± 9.68    |
| Small          | (240,240,160)     | 0.577 ± 0.153 | 0.657 ± 0.151 | 0.583 ± 0.151 | 0.666 ± 0.149 | 109.7 ± 0.37   |
| Large          | (240,240,160)     | 0.505 ± 0.262 | 0.554 ± 0.254 | 0.508 ± 0.262 | 0.574 ± 0.243 | 129.2 ± 0.50   |
| Large Relative | (240,240,160),(3) | 0.709 ± 0.103 | 0.880 ± 0.078 | 0.725 ± 0.094 | 0.913 ± 0.081 | 148.6 ± 0.20   |
```
Baseline results from training VNet models for 50 epochs, exploring how quickly the models converge. Models were optimised for dice loss using a scheduled Adam optimiser. Start learning rate: 5e-5; schedule drop: 0.9; schedule drop epoch frequency: 3. Z-score normalisation was applied to the inputs, with outliers replaced by the mean pixel value. Subsamples were drawn from a normal distribution centred on the volume. GitHub commit: cb39158

The optimal training session is chosen for each visualisation.
```
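
The training recipe described above (dice loss, scheduled Adam, z-score normalisation with outlier replacement) can be sketched as follows. This is a minimal illustration only; the helper names are ours and are not identifiers from this repository.

```python
import numpy as np

def z_score_normalise(volume, outlier_sigma=3.0):
    """Z-score normalise a volume, replacing outliers with the mean pixel.

    After normalisation the mean is 0, so pixels further than
    `outlier_sigma` standard deviations from the mean are set to 0.
    """
    normed = (volume - volume.mean()) / (volume.std() + 1e-8)
    normed[np.abs(normed) > outlier_sigma] = 0.0
    return normed

def scheduled_lr(epoch, start_lr=5e-5, drop=0.9, drop_every=3):
    """Step-decay schedule: multiply the LR by `drop` every `drop_every` epochs."""
    return start_lr * drop ** (epoch // drop_every)

def soft_dice_loss(y_true, y_pred, eps=1e-6):
    """1 - Dice coefficient, computed over the whole (binary or soft) volume."""
    intersection = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)
```

With Keras, a function like `scheduled_lr` can be wrapped in `tf.keras.callbacks.LearningRateScheduler` to apply the drop at epoch boundaries.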

#### Tiny VNet (64,64,64)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Tiny VNet (160,160,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Small VNet (160,160,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Small Relative VNet (160,160,160),(3)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Small Slice VNet (160,160,5)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Small VNet (240,240,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Large VNet (240,240,160)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

#### Large Relative VNet (240,240,160),(3)

Training Loss | Training Progress
:------------:|:---------------------------:
 | 

---

### Useful Code Snippets

Run 3D training:

``` Bash
python Segmentation/model/vnet_train.py
```

Run the unit tests and generate a unit-test coverage report:

``` Bash
python -m pytest --cov-report term-missing:skip-covered --cov=Segmentation && coverage html && open ./htmlcov/index.html
```

Start TensorBoard on Pompeii:

``` Bash
# On pompeii:
tensorboard --logdir logs --samples_per_plugin images=100

# On your local machine:
ssh -L 16006:127.0.0.1:6006 username@ip

# Then browse to http://localhost:16006/
```

---

### Valid 3D Configs

Batch / GPU | Crop Size | Depth Crop Size | Num Channels | Num Conv Layers | Kernel Size
:----------:|:---------:|:---------------:|:------------:|:---------------:|:-----------:
1           | 32        | 32              | 20           | 2               | (5,5,5)
1           | 64        | 64              | 32           | 2               | (3,3,3)
1           | 64        | 64              | 32           | 2               | (5,5,5)
3           | 64        | 32              | 16           | 2               | (3,3,3)
3 | 64 | 32 | 16 | 2 | (3,3,3) |