# 3D MRI Brain Tumor Segmentation Using Autoencoder Regularization
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/3d-mri-brain-tumor-segmentation-using/brain-tumor-segmentation-on-brats-2018)](https://paperswithcode.com/sota/brain-tumor-segmentation-on-brats-2018?p=3d-mri-brain-tumor-segmentation-using)
![Keras](https://img.shields.io/badge/Implemented%20in-Keras-red.svg)
## *[No longer under active development since 2020!]*
![The model architecture](https://www.suyogjadhav.com/images/misc/brats2018_sota_model.png)
<center><b>The Model Architecture</b></center><br /><center>Source: https://arxiv.org/pdf/1810.11654.pdf</center>
<br /><br />

A Keras implementation of the paper <b>3D MRI brain tumor segmentation using autoencoder regularization</b> by A. Myronenko (https://arxiv.org/abs/1810.11654). The author (team name: <b>NVDLMED</b>) ranked #1 on the <a href="https://www.med.upenn.edu/sbia/brats2018/" target="_blank">BraTS 2018</a> leaderboard using the model described in the paper.

This repository contains the complete model, including the loss function, implemented end-to-end in Keras. Usage is described in the next section.
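The paper's loss combines a soft Dice term on the segmentation output with L2 and KL-divergence terms from the VAE branch. As a rough orientation, the Dice term alone can be sketched in NumPy; this is an illustrative sketch, not the repository's exact Keras implementation:

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-8):
    """Soft Dice loss: 1 - 2*sum(p*g) / (sum(p^2) + sum(g^2) + eps)."""
    intersection = np.sum(y_true * y_pred)
    denom = np.sum(y_true ** 2) + np.sum(y_pred ** 2) + eps
    return 1.0 - 2.0 * intersection / denom

# A perfect prediction drives the loss towards 0;
# a completely wrong one drives it towards 1.
g = np.array([1.0, 0.0, 1.0, 1.0])
print(round(soft_dice_loss(g, g), 6))        # 0.0
print(round(soft_dice_loss(g, 1.0 - g), 6))  # 1.0
```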
# Usage
1. Download the file [`model.py`](model.py) and keep it in the same folder as your project notebook/script.

2. In your Python script, import the `build_model` function from `model.py`.
   ```python
   from model import build_model
   ```
   Importing it will automatically download an additional script needed for the implementation, [`group_norm.py`](https://github.com/titu1994/Keras-Group-Normalization/blob/master/group_norm.py), which contains a Keras implementation of the group normalization layer.

3. Note that the input MRI scans you feed to the model must have 4 dimensions, in <b>channels-first</b> format, i.e., the shape should look like (c, H, W, D), where:
- `c`, the number of channels, is divisible by 4.
- `H`, `W` and `D` (height, width and depth, respectively) are _all_ divisible by 2<sup>4</sup>, i.e., 16.
 This is required for the model to produce an output of the correct shape.

4. Now, to create the model, simply run:
   ```python
   model = build_model(input_shape, output_channels)
   ```
   where `input_shape` is a 4-tuple (channels, height, width, depth) and `output_channels` is the number of channels in the model's output.
   The output will be the segmentation map generated by the model, with shape (output_channels, height, width, depth), where height, width and depth are the same as those of the input. (During training, the model also returns a second output from the VAE branch, used for regularization; see the Updates section below.)
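The shape rules above can be sanity-checked with a small helper. The function names here are illustrative and are not part of `model.py`:

```python
def check_input_shape(shape):
    """Validate a channels-first (c, H, W, D) input shape for this model."""
    c, h, w, d = shape
    assert c % 4 == 0, "channel count must be divisible by 4"
    assert all(dim % 16 == 0 for dim in (h, w, d)), \
        "H, W and D must all be divisible by 2**4 = 16"

def expected_output_shape(input_shape, output_channels):
    """Spatial dimensions are preserved; only the channel count changes."""
    _, h, w, d = input_shape
    return (output_channels, h, w, d)

# The crop size used in the paper (4 modalities, 160 x 192 x 128 voxels)
# satisfies both constraints.
check_input_shape((4, 160, 192, 128))
print(expected_output_shape((4, 160, 192, 128), 3))  # (3, 160, 192, 128)
```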
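For reference, the group normalization layer from step 2 (`group_norm.py`) conceptually splits the channels into groups and normalizes each group to zero mean and unit variance. Below is a minimal NumPy sketch of the idea; the actual layer additionally learns per-channel scale and offset parameters:

```python
import numpy as np

def group_norm(x, groups, eps=1e-5):
    """Group normalization over a channels-first (C, H, W, D) array."""
    c = x.shape[0]
    assert c % groups == 0, "channel count must be divisible by groups"
    g = x.reshape(groups, c // groups, *x.shape[1:])
    mean = g.mean(axis=(1, 2, 3, 4), keepdims=True)
    var = g.var(axis=(1, 2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(x.shape)

x = np.random.randn(16, 8, 8, 8)
y = group_norm(x, groups=8)  # each group now has ~zero mean, ~unit variance
```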
# Example on BraTS2018 dataset
Go through the [Example_on_BRATS2018](Example_on_BRATS2018.ipynb) notebook to see an example where this model is used on the BraTS2018 dataset.

You can also test-run the example on Google Colaboratory by clicking the following button.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization/blob/master/Example_on_BRATS2018.ipynb)

However, note that you will need access to the BraTS2018 dataset to run the example on Google Colaboratory. If you already have access, you can simply upload the dataset to Google Drive and enter its path in the example notebook.
# Issues
If you encounter any issues or have feedback, please don't hesitate to [raise an issue](https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization/issues/new).
# Updates
- Thanks to [@Crispy13](https://github.com/Crispy13), issues #29 and #24 are now fixed. The VAE branch output was previously not included in the model's output; the model now gives two outputs: the segmentation map and the VAE output. The VAE branch weights were also not being trained; this should now be fixed. The Dice score calculation has been slightly modified to work with any batch size, and `SpatialDropout3D` is now used instead of `Dropout`, as specified in the paper.
- Added an [example notebook](Example_on_BRATS2018.ipynb) showing how to run the model on the BraTS2018 dataset.
- Added a minus sign before `loss_dice` in the loss function, following the discussion in #7 with [@woodywff](https://github.com/woodywff) and [@doc78](https://github.com/doc78).
- Thanks to [@doc78](https://github.com/doc78), the NaN loss problem has been permanently fixed.
- The NaN loss problem has now been fixed (by clipping the activations, for now).
- Added an argument to the `build_model` function to allow for a different number of channels in the output.