## Deep learning based skull stripping

This folder contains an implementation of our deep learning based skull stripping algorithm for FLAIR modality MRI.
It can be used to preprocess MRI images, train or fine-tune the network for skull stripping, or apply it to a custom dataset.

### Results
Some qualitative results for the worst case (94.76% DSC) and the best case (96.62% DSC) from the test set, before postprocessing.
Note that the seemingly suboptimal performance of the deep learning based segmentation (red outline) is largely explained by the ground truth (blue outline) itself being imperfect, since it was generated by another automatic skull stripping tool.

| Worst Case | Best Case |
|:----------:|:---------:|
|||

The average Dice similarity coefficient (DSC) for this split was 95.72%.
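For reference, the DSC between a predicted mask and a ground truth mask is twice the size of their overlap divided by the sum of their sizes; a minimal sketch of the computation (not taken from this repository) is shown below.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient: 2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```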
The distribution of DSC is shown below.



Training log for a random split with 5 test cases:



### Usage

#### Preprocessing
You need to have a folder with images preprocessed using the provided MATLAB function `preprocessing3D.m`.
It rescales them to spatial dimensions of 256x256 and performs contrast normalization.
Refer to the documentation of the `preprocessing3D.m` function for more details.
The main requirement for the following steps is to have image names in the format `<case_id>_<slice_number>.tif` and corresponding masks named `<case_id>_<slice_number>_mask.tif`.
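As a sanity check for this naming convention, here is a minimal sketch (not part of the repository; the path is a placeholder) that verifies every image has a corresponding mask file.

```python
import os

def check_mask_pairs(images_dir):
    # Images following the <case_id>_<slice_number>.tif convention (masks excluded)
    images = [f for f in os.listdir(images_dir)
              if f.endswith(".tif") and not f.endswith("_mask.tif")]
    # Images whose <case_id>_<slice_number>_mask.tif counterpart is missing
    missing = [f for f in images
               if not os.path.isfile(os.path.join(images_dir, f[:-4] + "_mask.tif"))]
    print(f"{len(images)} images, {len(missing)} missing masks")
    return missing

check_mask_pairs("data/train")  # placeholder path
```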
#### Training
The training script `train.py` has variables defined at the top that you need to set:

- `train_images_path` - folder containing training images
- `valid_images_path` - folder containing validation images

Other variables can be changed to adjust some training parameters.
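For example, the paths at the top of `train.py` might look like this (the paths are placeholders for your own data):

```python
# At the top of train.py (placeholder paths, adjust to your data layout)
train_images_path = "data/train/"
valid_images_path = "data/valid/"
```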
Then, run the training using
```
python train.py
```

#### Testing
To run the inference, you need to set up some variables defined at the top of the `test.py` script (see the example after the list):

- `weights_path` - path to the trained weights
- `train_images_path` - folder containing training images, used to compute the mean and standard deviation for data normalization; if you pass your own mean and standard deviation to the `test` function, this variable is not used
- `test_images_path` - folder with test images for prediction; it must also contain the corresponding mask files, however, these can be dummy (all zeros)
- `predictions_path` - folder for saving predicted and ground truth segmentation outlines (will be created if it doesn't exist)
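
For example, the variables at the top of `test.py` might be set like this (all paths and file names are placeholders):

```python
# At the top of test.py (placeholder paths and file names)
weights_path = "weights/weights.h5"    # trained model weights
train_images_path = "data/train/"      # used only to compute mean/std for normalization
test_images_path = "data/test/"        # test images plus (possibly dummy) masks
predictions_path = "predictions/"      # created if it doesn't exist
```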
When all variables are set up, run the inference using
```
python test.py
```

If you want to use our trained weights for inference, you should use the mean and standard deviation values for normalization that were computed on our training set.
They are the default parameter values of the `test` function in the `test.py` script.

Trained weights can be downloaded using the provided script
```
./download_weights.sh
```