# 3D_DenseSeg: 3D Densely Convolutional Networks for Volumetric Segmentation
By Toan Duc Bui, Jitae Shin, Taesup Moon

This is the implementation of our method for the MICCAI Grand Challenge on 6-month infant brain MRI segmentation, held in conjunction with MICCAI 2017.

### Update (Aug. 14, 2019): We have also released a PyTorch implementation of our journal version at https://github.com/tbuikr/3D-SkipDenseSeg

### Citation
```
@article{bui2019skip,
  title={Skip-connected 3D DenseNet for volumetric infant brain MRI segmentation},
  author={Bui, Toan Duc and Shin, Jitae and Moon, Taesup},
  journal={Biomedical Signal Processing and Control},
  volume={54},
  pages={101613},
  year={2019},
  publisher={Elsevier}
}
```

Journal version: https://www.sciencedirect.com/science/article/pii/S1746809419301946

Conference version: https://arxiv.org/abs/1709.03199

### Introduction
6-month infant brain MRI segmentation aims to segment the brain into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). It is a difficult task due to the large overlap in intensity between tissues and the low contrast of the images. We address the problem with a very deep 3D convolutional neural network. Our result achieved the top performance in six performance metrics.

### Dice Coefficient (DC) for the 9th subject
|             | CSF    | GM     | WM     | Average |
|-------------|:------:|:------:|:------:|:-------:|
| 3D-DenseSeg | 94.74% | 91.61% | 91.30% | 92.55%  |

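For reference, the Dice coefficient compares the predicted label map with the ground truth for each tissue class. A minimal NumPy sketch of the metric (the function, the volumes, and the label values 1=CSF, 2=GM, 3=WM are illustrative, not code from this repository):
```
import numpy as np

def dice_coefficient(pred, gt, tissue_label):
    """DC for one tissue class: 2*|P n G| / (|P| + |G|)."""
    p = (pred == tissue_label)
    g = (gt == tissue_label)
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # both volumes empty for this class
    return 2.0 * np.logical_and(p, g).sum() / float(denom)

# Example usage, assuming pred_volume and gt_volume are 3D label arrays:
# for name, lab in [('CSF', 1), ('GM', 2), ('WM', 3)]:
#     print('%s: %.4f' % (name, dice_coefficient(pred_volume, gt_volume, lab)))
```
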
### Citation
```
@article{bui20173d,
  title={3D Densely Convolutional Networks for Volumetric Segmentation},
  author={Bui, Toan Duc and Shin, Jitae and Moon, Taesup},
  journal={arXiv preprint arXiv:1709.03199},
  year={2017}
}
```

### Requirements
- 3D CAFFE (see below), Python 2.7, Ubuntu 14.04, cuDNN 5.1, CUDA 8.0
- Titan X Pascal GPU with 12 GB memory

### Installation
- Step 1: Download the source code
```
git clone https://github.com/tbuikr/3D_DenseSeg.git
cd 3D_DenseSeg
```
- Step 2: Download the dataset at `http://iseg2017.web.unc.edu/download/`, then set the dataset path `data_path` and the output path `target_path` in `prepare_hdf5_cutedge.py`
```
data_path = '/path/to/your/dataset/'
target_path = '/path/to/your/save/hdf5 folder/'
```

- Step 3: Generate the HDF5 dataset (the sketch after this step illustrates the layout the HDF5 files follow)
```
python prepare_hdf5_cutedge.py
```
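
Caffe's HDF5 data layer reads a `data` and a `label` dataset (N x C x D x H x W) from each `.h5` file, plus a text file listing those files; `prepare_hdf5_cutedge.py` handles this, including edge cutting and intensity normalization. A minimal sketch of the layout only, where the file names, the use of `nibabel`/`h5py`, and the two-channel (T1, T2) stacking are illustrative assumptions:
```
import h5py
import nibabel as nib
import numpy as np

# Illustrative subject files; iSeg-2017 ships T1, T2 and label volumes as .hdr/.img pairs.
t1 = np.squeeze(nib.load('subject-1-T1.hdr').get_data()).astype(np.float32)
t2 = np.squeeze(nib.load('subject-1-T2.hdr').get_data()).astype(np.float32)
label = np.squeeze(nib.load('subject-1-label.hdr').get_data()).astype(np.uint8)

# Stack the two modalities as channels and add batch/channel axes: N x C x D x H x W.
data = np.stack([t1, t2], axis=0)[np.newaxis, ...]
label = label[np.newaxis, np.newaxis, ...]

with h5py.File('train_subject1.h5', 'w') as f:
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=label)

# The HDF5 data layer is pointed at a text file listing the .h5 files.
with open('train_list.txt', 'w') as f:
    f.write('train_subject1.h5\n')
```
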
- Step 4: Run training
```
./run_train.sh
```

- Step 5: Generate the score map and segmentation image. Change the paths in `seg_deploy.py`:
```
data_path = '/path/to/your/dataset/'
caffe_root = '/path/to/your/caffe/build/tools/caffe/'  # e.g. '/home/toanhoi/caffe/build/tools/caffe/'
```
And run
```
python seg_deploy.py
```
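
`seg_deploy.py` runs the trained network over the test volume and saves a per-class score map; the segmentation image is then the argmax over the class scores at each voxel. A minimal sketch of that final step (the array names and the 4-class assumption of background, CSF, GM, WM are illustrative):
```
import numpy as np

def scores_to_segmentation(score_map):
    """score_map: (num_classes, D, H, W) class scores -> (D, H, W) label map."""
    return np.argmax(score_map, axis=0).astype(np.uint8)

# Example usage:
# seg = scores_to_segmentation(score_map)  # labels 0..3 for background, CSF, GM, WM
```
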
### 3D CAFFE
For CAFFE, we use 3D UNet CAFFE with minor modifications. First, download 3D UNet CAFFE at

`https://lmb.informatik.uni-freiburg.de/resources/opensource/unet.en.html`

and install it as described in its README file. We then change the HDF5DataLayer so that it can randomly crop patches, based on the code at `https://github.com/yulequan/3D-Caffe`.
You can download that code with
```
git clone https://github.com/yulequan/3D-Caffe/
cd 3D-Caffe
git checkout 3D-Caffe
cd ../
```

After downloading both source trees, you have two folders, `3D-Caffe` and `caffe` (3D UNet CAFFE). Copy the HDF5 data layer files from `3D-Caffe` into `caffe`:

```
cp ./3D-Caffe/src/caffe/layers/hdf5_data_layer.cpp ./caffe/src/caffe/layers/
cp ./3D-Caffe/src/caffe/layers/hdf5_data_layer.cu ./caffe/src/caffe/layers/
cp ./3D-Caffe/include/caffe/layers/hdf5_data_layer.hpp ./caffe/include/caffe/layers/hdf5_data_layer.hpp
```

Then add these lines to the `message TransformationParameter` field of `caffe.proto` in `./caffe/src/caffe/proto` (3D UNet CAFFE):
```
optional uint32 crop_size_w = 8 [default = 0];
optional uint32 crop_size_h = 9 [default = 0];
optional uint32 crop_size_l = 10 [default = 0];
```

Add the following code to `./caffe/include/caffe/filler.hpp`:

```
/**
 * 3D bilinear filler: fills a 5-D deconvolution blob with trilinear
 * interpolation weights (the 3D analogue of Caffe's BilinearFiller).
 */
template <typename Dtype>
class BilinearFiller_3D : public Filler<Dtype> {
 public:
  explicit BilinearFiller_3D(const FillerParameter& param)
      : Filler<Dtype>(param) {}
  virtual void Fill(Blob<Dtype>* blob) {
    CHECK_EQ(blob->num_axes(), 5) << "Blob must be 5 dim.";
    CHECK_EQ(blob->shape(-1), blob->shape(-2)) << "Filter must be square";
    CHECK_EQ(blob->shape(-2), blob->shape(-3)) << "Filter must be square";
    Dtype* data = blob->mutable_cpu_data();

    int f = ceil(blob->shape(-1) / 2.);
    float c = (2 * f - 1 - f % 2) / (2. * f);
    for (int i = 0; i < blob->count(); ++i) {
      float x = i % blob->shape(-1);
      float y = (i / blob->shape(-1)) % blob->shape(-2);
      float z = (i / (blob->shape(-1) * blob->shape(-2))) % blob->shape(-3);
      data[i] = (1 - fabs(x / f - c)) * (1 - fabs(y / f - c)) * (1 - fabs(z / f - c));
    }

    CHECK_EQ(this->filler_param_.sparse(), -1)
         << "Sparsity not supported by this Filler.";
  }
};
```

and add the following branch to the `GetFiller(const FillerParameter& param)` function in the same file:

```
else if (type == "bilinear_3D") {
    return new BilinearFiller_3D<Dtype>(param);
  }
```
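
`BilinearFiller_3D` above fills the kernel with trilinear upsampling weights, so a deconvolution initialized with it performs trilinear interpolation. A small NumPy sketch of the same weight formula, handy for sanity-checking the values (this helper is not part of the repository):
```
import numpy as np

def trilinear_kernel(k):
    """k x k x k trilinear upsampling weights, matching BilinearFiller_3D."""
    f = np.ceil(k / 2.0)
    c = (2 * f - 1 - f % 2) / (2.0 * f)
    w = np.zeros((k, k, k), dtype=np.float32)
    for z in range(k):
        for y in range(k):
            for x in range(k):
                w[z, y, x] = (1 - abs(x / f - c)) * (1 - abs(y / f - c)) * (1 - abs(z / f - c))
    return w

# trilinear_kernel(4) gives the 4x4x4 kernel typically used for 2x upsampling.
```
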
Finally, recompile 3D UNet CAFFE (uncomment `USE_CUDNN := 1` in `Makefile.config`) and use our prototxt files. Please cite these papers when you use the CAFFE code.

### Note
- If you want to generate the network prototxt, change the `caffe_root` path
```
caffe_root = '/path/to/your/caffe/build/tools/caffe/'  # e.g. '/home/toanhoi/caffe/build/tools/caffe/'
```
And run
```
python make_3D_DenseSeg.py
```
- If you get the error `AttributeError: 'LayerParameter' object has no attribute 'shuffle'` when running `python make_3D_DenseSeg.py`, fix it by replacing line 35 of `net_spec.py` with:
```
  #param_names = [s for s in dir(layer) if s.endswith('_param')]
  param_names = [f.name for f in layer.DESCRIPTOR.fields if f.name.endswith('_param')]
```
- Plot the training loss during training
```
python plot_trainingloss.py ./log/3D_DenseSeg.log
```
174