![Salmon-logo-1](images/salmon.JPG)
# SALMON v.2: Segmentation deep learning ALgorithm based on MONai toolbox
- SALMON is a computational toolbox for segmentation using neural networks (3D patch-based segmentation)
- SALMON is based on nnU-Net and MONAI: PyTorch-based, open-source frameworks for deep learning in healthcare imaging.

(https://github.com/Project-MONAI/MONAI)

(https://github.com/MIC-DKFZ/nnUNet)

This is my "open-box" version, for modifying the parameters for a particular task, while the two frameworks above are hard-coded.
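Patch-based segmentation means the network never sees a whole volume at once; it trains on fixed-size 3D windows cut from each scan. A minimal sketch of 3D patch extraction (a hypothetical helper for illustration, not the actual SALMON code) could look like:

```python
import numpy as np

def extract_patches_3d(volume, patch_size, stride):
    # volume: 3D numpy array (D, H, W); returns a list of 3D patches,
    # sliding a window of patch_size with step `stride` along each axis
    patches = []
    d, h, w = volume.shape
    pd, ph, pw = patch_size
    for z in range(0, d - pd + 1, stride):
        for y in range(0, h - ph + 1, stride):
            for x in range(0, w - pw + 1, stride):
                patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
    return patches
```

For example, a 64³ volume with 32³ patches and stride 32 yields 8 non-overlapping patches.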

*******************************************************************************
## Requirements
Follow the steps in "installation_commands.txt": installation via Anaconda and creation of a virtual environment to download the Python libraries and PyTorch/CUDA.
*******************************************************************************
## Python scripts and their function

- organize_folder_structure.py: Organizes the data into the folder structure (training, validation, testing) required by the network.
Labels are resampled and resized to match the corresponding image, to avoid array-size conflicts. You can also set a new image resolution for the dataset here.

- init.py: List of options used to train the network.

- check_loader_patches.py: Shows examples of the patches fed to the network during training.

- networks.py: The architecture available for segmentation is an nn-U-Net.

- train.py: Runs the training.

- predict_single_image.py: Launches inference on a single input image chosen by the user.
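The label resampling mentioned above (matching each label to its image grid) is typically done with nearest-neighbour interpolation, so that no new class values are invented. A toy 1D version for illustration (the actual script operates on NIfTI volumes):

```python
def resize_nearest(values, new_len):
    # nearest-neighbour resize of a 1D label array to new_len samples;
    # each output sample copies the closest input sample, so label
    # values stay integers and no intermediate classes appear
    old_len = len(values)
    return [values[min(old_len - 1, int(i * old_len / new_len))]
            for i in range(new_len)]
```

For example, upsampling the labels [0, 0, 1, 1] to 8 samples gives [0, 0, 0, 0, 1, 1, 1, 1].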
*******************************************************************************
## Usage
### Folder structure:

First use "organize_folder_structure.py" to organize the data.
Modify the input parameters to select the two dataset folders (images and labels). Set the image resolution here before training.

    .
    ├── Data_folder
    |   ├── CT
    |   |   ├── 1.nii
    |   |   ├── 2.nii
    |   |   └── 3.nii
    |   ├── CT_labels
    |   |   ├── 1.nii
    |   |   ├── 2.nii
    |   |   └── 3.nii

Data structure after running it:

    .
    ├── Data_folder
    |   ├── CT
    |   ├── CT_labels
    |   ├── images
    |   |   ├── train
    |   |   |   ├── image1.nii
    |   |   |   └── image2.nii
    |   |   ├── val
    |   |   |   ├── image3.nii
    |   |   |   └── image4.nii
    |   |   └── test
    |   |       ├── image5.nii
    |   |       └── image6.nii
    |   └── labels
    |       ├── train
    |       |   ├── label1.nii
    |       |   └── label2.nii
    |       ├── val
    |       |   ├── label3.nii
    |       |   └── label4.nii
    |       └── test
    |           ├── label5.nii
    |           └── label6.nii
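The split that produces the layout above can be sketched as follows (function name and ratios are illustrative; the actual script also resamples and renames the volumes):

```python
import random

def split_dataset(filenames, val_frac=0.2, test_frac=0.2, seed=42):
    # shuffle once with a fixed seed for reproducibility,
    # then carve the list into test/val/train slices
    files = sorted(filenames)
    random.Random(seed).shuffle(files)
    n_test = int(len(files) * test_frac)
    n_val = int(len(files) * val_frac)
    return {
        "test": files[:n_test],
        "val": files[n_test:n_test + n_val],
        "train": files[n_test + n_val:],
    }
```

With 10 cases and the default ratios this yields 6 train / 2 val / 2 test, with no case appearing in two splits.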

*******************************************************************************
### Training:
- Modify "init.py" to set the parameters and start the training/testing on the data. Read the description of each parameter.
- Then launch "train.py" for training. Tensorboard is available to monitor the training (a "runs" folder is created).
- Check and modify the train_transforms applied to the images in "train.py" for your specific case (e.g. the last update adds HU windowing for CT images).
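HU windowing clips CT intensities (Hounsfield units) to a fixed range and rescales them to [0, 1]. A minimal stand-alone version for illustration (the window bounds here are placeholders; the actual transform in "train.py" uses MONAI's intensity transforms):

```python
def hu_window(values, hu_min=-100.0, hu_max=300.0):
    # clip intensities to the HU window, then rescale linearly to [0, 1];
    # hu_min/hu_max are illustrative, pick them for the tissue of interest
    out = []
    for v in values:
        v = max(hu_min, min(hu_max, v))
        out.append((v - hu_min) / (hu_max - hu_min))
    return out
```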

Sample images: the following images show the segmentation of the carotid artery from an MRI sequence

![Image](images/image.gif)![result](images/result.gif)

Sample images: the following images show the multi-label segmentation of the prostate transition zone and peripheral zone from an MRI sequence

![Image1](images/prostate.gif)![result1](images/prostate_inf.gif)

*******************************************************************************
### Inference:
- Launch "predict_single_image.py" to test the network. Modify the parameters in the parse section to select the paths of the weights, the image to infer, and the result.
- You can test the model on a new image with a size and resolution different from the training data. The script will resample it before inference and return a mask
with the same size and resolution as the source image.
*******************************************************************************
### Tips:
- Use and modify "check_loader_patches.py" to check the patches fed during training.
- "networks.py" calls the nn-U-Net, which adapts itself to the input data (resolution and patch size). The script also saves the graph of your network, so you can visualize it.
- It is possible to add other networks, but for segmentation the U-Net architecture is the state of the art.

### Sample script inference
- The label can be omitted (None) if you segment an unknown image. You have to add --resolution if you resampled the data during training (look at the argparse section in the code).
```console
python predict_single_image.py --image './Data_folder/image.nii' --label './Data_folder/label.nii' --result './Data_folder/prova.nii' --weights './best_metric_model.pth'
```
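The parse section accepts roughly the flags shown in the command above. This sketch only mirrors those options; the defaults and the shape of --resolution are assumptions, so check the real argparse in "predict_single_image.py":

```python
import argparse

def build_parser():
    # flags mirror the sample command above; defaults are placeholders
    p = argparse.ArgumentParser(description="Inference on a single image")
    p.add_argument("--image", required=True, help="path to the input image (.nii)")
    p.add_argument("--label", default=None, help="optional ground-truth label (.nii)")
    p.add_argument("--result", required=True, help="where to write the predicted mask")
    p.add_argument("--weights", required=True, help="trained model weights (.pth)")
    p.add_argument("--resolution", default=None, nargs=3, type=float,
                   help="spacing used during training, if the data were resampled")
    return p
```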

*******************************************************************************
### Multi-channel segmentation:

The subfolder "multi_label_segmentation_example" includes the modified code for the multi-label scenario.
The example segments the prostate (1-channel input) into the transition zone and peripheral zone (2-channel output).
The gif files above show some example images.

Some notes:
- You must add an additional channel for the background. Example: 0 background, 1 prostate, 2 prostate tumor = 3 output channels in total.
- Tensorboard can show you all segmented channels, but for now the metric is the mean Dice over all channels. If you want to evaluate the Dice score for each channel you
  have to modify the plot_dice function a bit. I will do it...one day...who knows...maybe not
- The loss is DiceLoss + CrossEntropy. You can modify it if you want to try other losses (https://docs.monai.io/en/latest/losses.html#diceloss)
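A per-channel Dice score of the kind the note describes could be computed like this (a sketch on flat binary masks, not the actual plot_dice code):

```python
def dice_per_channel(pred, target, eps=1e-8):
    # pred/target: one flat binary mask per channel (channel 0 = background);
    # Dice = 2 * |intersection| / (|pred| + |target|), eps avoids 0/0
    scores = []
    for p, t in zip(pred, target):
        inter = sum(a * b for a, b in zip(p, t))
        denom = sum(p) + sum(t)
        scores.append((2.0 * inter + eps) / (denom + eps))
    return scores
```

Averaging the returned list recovers the mean Dice currently reported.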

Check more examples at https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb.