# U-Net for brain segmentation

U-Net implementation in PyTorch for FLAIR abnormality segmentation in brain MRI, based on a deep learning segmentation algorithm used in [Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm](https://doi.org/10.1016/j.compbiomed.2019.05.002).

This repository is an all-Python port of the official MATLAB/Keras implementation in [brain-segmentation](https://github.com/mateuszbuda/brain-segmentation).
Weights for trained models are provided and can be used for inference or fine-tuning on a different dataset.
If you use code or weights shared in this repository, please consider citing:

```
@article{buda2019association,
  title={Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm},
  author={Buda, Mateusz and Saha, Ashirbani and Mazurowski, Maciej A},
  journal={Computers in Biology and Medicine},
  volume={109},
  year={2019},
  publisher={Elsevier},
  doi={10.1016/j.compbiomed.2019.05.002}
}
```

## docker

```
docker build -t brainseg .
```

```
nvidia-docker run --rm --shm-size 8G -it -v `pwd`:/workspace brainseg
```
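
If your setup uses the NVIDIA Container Toolkit rather than the legacy `nvidia-docker` wrapper (an assumption about your local environment, not a requirement of this repository), an equivalent invocation is typically:

```
docker run --rm --gpus all --shm-size 8G -it -v `pwd`:/workspace brainseg
```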

## PyTorch Hub

Loading the model using PyTorch Hub: [pytorch.org/hub/mateuszbuda\_brain-segmentation-pytorch\_unet](https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/)

```python
import torch
model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
    in_channels=3, out_channels=1, init_features=32, pretrained=True)
```
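
As a rough sanity check, a minimal inference sketch with the Hub model might look like the following. The random input tensor, the 0.5 threshold, and the assumption that the network already outputs a sigmoid probability map are illustrative choices, not prescribed by this README; for real images, apply the same preprocessing as in the repository first.

```python
import torch

model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
    in_channels=3, out_channels=1, init_features=32, pretrained=True)
model.eval()

# Dummy stand-in for a preprocessed 3-channel 256x256 slice (batch of 1);
# replace with a real, normalized image tensor in practice.
x = torch.rand(1, 3, 256, 256)

with torch.no_grad():
    prob = model(x)            # assumed to already be a probability map in [0, 1]
mask = (prob > 0.5).squeeze()  # boolean 256x256 FLAIR abnormality mask
```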

## data

![dataset](https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/assets/brain-mri-lgg.png?raw=true)

The dataset used for development and evaluation was made publicly available on Kaggle: [kaggle.com/mateuszbuda/lgg-mri-segmentation](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation).
It contains MR images from the [TCIA LGG collection](https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG) with segmentation masks approved by a board-certified radiologist at Duke University.
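
For orientation only, a sketch of reading one image/mask pair is shown below. It assumes the Kaggle archive layout of one folder per case with slice files named `<case>_<slice>.tif` and masks named `<case>_<slice>_mask.tif`; verify the paths against your extracted copy, since the layout is not documented in this README.

```python
# Hypothetical example paths for one case from the Kaggle archive; adjust them
# to whatever your extracted copy actually contains.
from skimage.io import imread

image = imread("./kaggle_3m/TCGA_CS_4944_20010208/TCGA_CS_4944_20010208_1.tif")      # slice, expected shape (H, W, 3)
mask = imread("./kaggle_3m/TCGA_CS_4944_20010208/TCGA_CS_4944_20010208_1_mask.tif")  # mask, expected shape (H, W)

print(image.shape, mask.shape, mask.max())
```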

## model

The segmentation model implemented in this repository is a U-Net, as described in [Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm](https://doi.org/10.1016/j.compbiomed.2019.05.002), with added batch normalization.
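
To make "with added batch normalization" concrete, here is an illustrative sketch of the kind of two-convolution block (Conv → BatchNorm → ReLU, twice) that each U-Net stage uses; it paraphrases the idea and is not a copy of `unet.py`.

```python
import torch.nn as nn

def double_conv(in_channels, out_channels):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
    )

# e.g. a first encoder block for 3 input channels and init_features=32:
# encoder1 = double_conv(3, 32)
```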

![unet](https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/assets/unet.png?raw=true)

## results

|![TCGA_DU_6404_19850629](https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/assets/TCGA_DU_6404_19850629.gif?raw=true)|![TCGA_HT_7879_19981009](https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/assets/TCGA_HT_7879_19981009.gif?raw=true)|![TCGA_CS_4944_20010208](https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/assets/TCGA_CS_4944_20010208.gif?raw=true)|
|:-------:|:-------:|:-------:|
| 94% DSC | 91% DSC | 89% DSC |

Qualitative results for validation cases from three different institutions with DSC of 94%, 91%, and 89%.
Green outlines correspond to the ground truth and red outlines to model predictions.
Images show the FLAIR modality after preprocessing.

![dsc](https://github.com/mateuszbuda/brain-segmentation-pytorch/blob/master/assets/dsc.png?raw=true)

Distribution of DSC for 10 randomly selected validation cases.
The red vertical line corresponds to the mean DSC (91%) and the green one to the median DSC (92%).
Results may be biased, since model selection was based on the mean DSC on these validation cases.
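
For reference, DSC here is the Dice similarity coefficient between predicted and ground-truth masks. A minimal per-case computation (function name and smoothing term are illustrative, not taken from this repository) could look like:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks (numpy arrays)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```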

## inference

1. Download and extract the dataset from [Kaggle](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation).
2. Run the docker container.
3. Run the `inference.py` script with the paths to weights and images specified. Trained weights for input images of size 256x256 are provided in the `./weights/unet.pt` file (see the sketch below for loading these weights directly in Python). For more options and help, run: `python3 inference.py --help`.
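
A hedged sketch of using the provided weights outside of `inference.py` follows. Whether `./weights/unet.pt` stores a fully pickled model or only a `state_dict` is not stated in this README, so the snippet handles both cases.

```python
import torch

checkpoint = torch.load("./weights/unet.pt", map_location="cpu")
if isinstance(checkpoint, torch.nn.Module):
    # the file stores a fully pickled model object
    model = checkpoint
else:
    # the file stores a state_dict; recreate the architecture first
    model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
        in_channels=3, out_channels=1, init_features=32, pretrained=False)
    model.load_state_dict(checkpoint)
model.eval()
```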

## train

1. Download and extract the dataset from [Kaggle](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation).
2. Run the docker container.
3. Run the `train.py` script. The default path to images is `./kaggle_3m`. For more options and help, run: `python3 train.py --help`.

Training can also be run using the Kaggle kernel shared together with the dataset: [kaggle.com/mateuszbuda/brain-segmentation-pytorch](https://www.kaggle.com/mateuszbuda/brain-segmentation-pytorch).
Due to memory limitations of Kaggle kernels, input images are of size 224x224 instead of 256x256.

Running this code on a custom dataset would likely require adjustments in `dataset.py`.
Should you need help with this, just open an issue.
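
As a starting point for such adjustments, a stripped-down `torch.utils.data.Dataset` that yields `(image, mask)` tensor pairs is sketched below; the file-listing logic and directory arguments are placeholders for your own data and this is not the `dataset.py` shipped here.

```python
import os

import numpy as np
import torch
from skimage.io import imread
from torch.utils.data import Dataset

class CustomSegmentationDataset(Dataset):
    """Minimal (image, mask) dataset; replace the pairing logic for your data."""

    def __init__(self, images_dir, masks_dir):
        self.images_dir = images_dir
        self.masks_dir = masks_dir
        self.filenames = sorted(os.listdir(images_dir))

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        name = self.filenames[idx]
        image = imread(os.path.join(self.images_dir, name)).astype(np.float32)
        mask = imread(os.path.join(self.masks_dir, name)).astype(np.float32)
        image = torch.from_numpy(image).permute(2, 0, 1)  # HWC -> CHW
        mask = torch.from_numpy(mask).unsqueeze(0)        # HW -> 1HW
        return image, mask
```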

## TensorRT inference

If you want to run the model inference with the TensorRT runtime, here is a blog post from NVIDIA that covers this: [Speeding Up Deep Learning Inference Using TensorRT](https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorrt/).
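
A common first step on that path is exporting the PyTorch model to ONNX and then building a TensorRT engine from the ONNX file (for example with `trtexec`). A hedged export sketch, with illustrative file and tensor names, is:

```python
import torch

model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
    in_channels=3, out_channels=1, init_features=32, pretrained=True)
model.eval()

# Export with a fixed 1x3x256x256 input; tensor and file names are arbitrary.
dummy = torch.rand(1, 3, 256, 256)
torch.onnx.export(
    model, dummy, "unet.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
)
```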