# How to configure

The config file specifies the data path, optimizer, scheduler, and other training settings.

In each config file:
- `stages/data_params/root`: path to the folder that stores the image data.
- `image_size`: the size of the input images.

Note:

You do not need to change `train_csv` and `valid_csv`; they are overridden by the bash scripts below.
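
For orientation, here is a minimal sketch of how these keys might be read once a config is loaded. It assumes a YAML config; the file name and the exact nesting are assumptions, not the repository's actual schema.

```python
# Hypothetical sketch only: reading the keys described above with PyYAML.
# The file name and the nesting are assumptions; check the real config files.
import yaml

with open("configs/train_3w.yml") as f:   # assumed config file name
    cfg = yaml.safe_load(f)

root = cfg["stages"]["data_params"]["root"]   # folder that stores the image data
image_size = cfg.get("image_size")            # input image size (its location may differ)
print(root, image_size)
# train_csv / valid_csv are overridden per fold by the training bash scripts,
# so whatever is written in the file is only a default.
```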

# Preprocessing

The following data is used for different models.

* 3 windows (3w) data (see the windowing sketch after this list):
```bash
python src/preprocessing.py extract-images --inputdir <kaggle_input_dir> --outputdir <output_folder>
```

* 3 windows (3w) with crop data:
```bash
python src/preprocessing_3w.py extract-images --inputdir <kaggle_input_dir> --outputdir <output_folder>
```

* 3d data:
```bash
python src/preprocessing2.py
```
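
To make the "3 windows" naming concrete, here is a minimal sketch of the usual idea: render each CT slice under three Hounsfield-unit windows and stack them as the three image channels. The window centers and widths below (brain, subdural, bone) are common choices, not necessarily the values used by `src/preprocessing.py`.

```python
# Sketch of 3-window preprocessing (assumed, not taken from src/preprocessing.py):
# clip a slice in Hounsfield units to three windows and stack them as channels.
import numpy as np

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip a slice (in Hounsfield units) to one window and rescale to [0, 1]."""
    low, high = center - width / 2, center + width / 2
    return (np.clip(hu, low, high) - low) / (high - low)

def three_window_image(hu_slice: np.ndarray) -> np.ndarray:
    """Stack brain / subdural / bone windows into an H x W x 3 array."""
    channels = [
        apply_window(hu_slice, center=40, width=80),     # brain window
        apply_window(hu_slice, center=80, width=200),    # subdural window
        apply_window(hu_slice, center=600, width=2800),  # bone window
    ]
    return np.stack(channels, axis=-1)
```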

# How to run

* Start docker:
```bash
make run
make exec
cd /kaggle-rsna/
```

* Train `resnet18, resnet34, resnet50, alexnet` with the `3 windows (3w)` setting:

```bash
bash bin/train_bac_3w.sh
```

Note: `normalize=True`

* Train `resnet50` with the `3d` setting:

```bash
bash bin/train_bac_3d.sh
```

Note: `normalize=False`

* Train `densenet169` with the `3 windows and crop` setting:

```bash
bash bin/train_toan.sh
bash bin/train_toan_resume.sh
```

Note: `normalize=True`

where:
- `CUDA_VISIBLE_DEVICES`: the GPU(s) to use for training.
- `LOGDIR`: the output folder that stores the checkpoints, logs, etc.
- `model_name`: the name of the model to train. The script supports the model names listed [here](https://github.com/creafz/pytorch-cnn-finetune).
- It is better to create a `wandb` account: it lets you track your logs, back up the code, and store the checkpoints in the cloud in real time. If you do not want to use `wandb`, set `WANDB=0`.

Output:

The best checkpoint is saved at `${LOGDIR}/${log_name}/checkpoints/best.pth`.
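
If you need to inspect or reuse that checkpoint outside the provided scripts, a rough sketch follows; the path and the assumption that the file is a dict with a `model_state_dict` entry are illustrative, not guaranteed by the repository.

```python
# Hypothetical sketch: loading best.pth for inspection. The path and the
# "model_state_dict" key are assumptions about the checkpoint layout.
import torch

ckpt = torch.load("logs/example_run/checkpoints/best.pth", map_location="cpu")
state_dict = ckpt.get("model_state_dict", ckpt)   # fall back to a raw state_dict
print(sorted(state_dict.keys())[:5])              # peek at a few parameter names
# model.load_state_dict(state_dict)  # requires building the same architecture first
```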

# How to test

```bash
python src/inference.py
```

Check the function `predict_test_tta_ckp` for more information; you may want to change the path, the model name, and the output path.
For the `3d` setting, use `normalization=False`; otherwise use `normalization=True`.
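
For context, here is a minimal sketch of what test-time-augmentation (TTA) averaging typically looks like. It is not the body of `predict_test_tta_ckp`; the function name, arguments, and the choice of a horizontal flip are illustrative only.

```python
# Illustrative TTA sketch (not the repository's implementation): average
# sigmoid outputs over the original batch and a horizontally flipped copy.
import torch

@torch.no_grad()
def predict_with_tta(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    model.eval()
    preds = torch.sigmoid(model(images))
    preds_flipped = torch.sigmoid(model(torch.flip(images, dims=[-1])))
    return (preds + preds_flipped) / 2
```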

# Ensemble KFOLD

In `src/ensemble.py`, change the prediction path for each fold of the model and the name of the output ensemble.

```bash
python src/ensemble.py
```
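
As a rough picture of what this step does, the sketch below averages per-ID probabilities across fold prediction files. The paths, the Kaggle-style `ID`/`Label` columns, and the averaging itself are assumptions; the exact logic lives in `src/ensemble.py`.

```python
# Illustrative K-fold ensembling sketch; paths and column names are assumptions.
import pandas as pd

fold_paths = [f"predictions/fold{i}.csv" for i in range(5)]   # assumed file layout
frames = [pd.read_csv(path) for path in fold_paths]

# Average the predicted probability for each ID across folds.
ensembled = pd.concat(frames).groupby("ID", as_index=False)["Label"].mean()
ensembled.to_csv("submission_ensemble.csv", index=False)
```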