# Configuration item
|
|
### data_cfg
* Data configuration
> * Args
>    * dataset_name: Only `CASIA-B` and `OUMVLP` are supported now.
>    * dataset_root: The path where your dataset is stored.
>    * num_workers: The number of workers used to load data.
>    * dataset_partition: The path of your dataset partition file, which splits the dataset into two parts, i.e. the train set and the test set.
>    * cache: If `True`, load all data into memory while building the dataset.
>    * test_dataset_name: The name of the test dataset.
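
A minimal `data_cfg` fragment might look like the following (values are illustrative; see the full example at the end of this page):

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B.json
  num_workers: 1
  cache: false
  test_dataset_name: CASIA-B
```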
|
|
----
|
|
### loss_cfg
* Loss function
> * Args
>    * type: Loss function type; `TripletLoss` and `CrossEntropyLoss` are supported.
>    * loss_term_weight: The weight of this loss term.
>    * log_prefix: The prefix of this loss in the log.
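
Several loss terms can be listed together; extra keys such as `margin` or `scale` are specific to the chosen loss. A minimal sketch, consistent with the example at the end of this page:

```yaml
loss_cfg:
  - loss_term_weight: 1.0
    margin: 0.2
    type: TripletLoss
    log_prefix: triplet
  - loss_term_weight: 0.1
    scale: 16
    type: CrossEntropyLoss
    log_prefix: softmax
```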
|
|
----
|
|
### optimizer_cfg
* Optimizer
> * Args
>    * solver: Optimizer type, e.g. `SGD`, `Adam`.
>    * **others**: Please refer to `torch.optim`.
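
For example, an SGD optimizer; the remaining keys are forwarded to the chosen `torch.optim` solver:

```yaml
optimizer_cfg:
  solver: SGD
  lr: 0.1
  momentum: 0.9
  weight_decay: 0.0005
```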
|
|
|
|
|
### scheduler_cfg
* Learning rate scheduler
> * Args
>    * scheduler: Learning rate scheduler type, e.g. `MultiStepLR`.
>    * **others**: Please refer to `torch.optim.lr_scheduler`.
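
For example, a `MultiStepLR` schedule; the remaining keys are forwarded to `torch.optim.lr_scheduler`:

```yaml
scheduler_cfg:
  scheduler: MultiStepLR
  gamma: 0.1
  milestones:
    - 20000
    - 40000
```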
|
|
----
|
|
### model_cfg
* Model to be trained
> * Args
>    * model: Model type; please refer to the [Model Library](../opengait/modeling/models) for the supported values.
>    * **others**: Please refer to the [training configuration file of the corresponding model](../configs).
|
|
----
|
|
### evaluator_cfg
* Evaluator configuration
> * Args
>    * enable_float16: If `True`, enable automatic mixed precision mode.
>    * restore_ckpt_strict: If `True`, check that the checkpoint exactly matches the defined model.
>    * restore_hint: An `int` value indicates the iteration number of the checkpoint to restore; a `str` value indicates the path to the checkpoint to restore.
>    * save_name: The name of the experiment.
>    * eval_func: The name of the evaluation function. For `CASIA-B`, choose `identification`.
>    * sampler:
>        - type: The name of the sampler. Choose `InferenceSampler`.
>        - sample_type: In general, we use `all_ordered` to input all frames in their natural order, which keeps the tests consistent.
>        - batch_size: An `int` value.
>        - **others**: Please refer to [data.sampler](../opengait/data/sampler.py) and [data.collate_fn](../opengait/data/collate_fn.py).
>    * transform: Supports `BaseSilCuttingTransform` and `BaseSilTransform`. The difference is that `BaseSilCuttingTransform` cuts out the black pixels on both sides horizontally.
>    * metric: `euc` or `cos`; generally, `euc` performs better.
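
A typical evaluator fragment, consistent with the full example at the end of this page (the sampler `type` is written out explicitly here for illustration):

```yaml
evaluator_cfg:
  enable_float16: true
  restore_ckpt_strict: true
  restore_hint: 60000 # iteration number of the checkpoint to restore
  save_name: Baseline
  sampler:
    type: InferenceSampler
    sample_type: all_ordered
    batch_size: 16
  metric: euc
```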
|
|
----
|
|
### trainer_cfg
* Trainer configuration
> * Args
>    * restore_hint: An `int` value indicates the iteration number of the checkpoint to restore; a `str` value indicates the path to the checkpoint to restore. This option is often used to fine-tune on a new dataset or to resume an interrupted training run.
>    * fix_BN: If `True`, fix the weights of all `BatchNorm` layers.
>    * log_iter: Log training information every `log_iter` iterations.
>    * save_iter: Save a checkpoint every `save_iter` iterations.
>    * with_test: If `True`, test the model every `save_iter` iterations, at the cost of a small training slowdown. (*Disabled by default.*)
>    * optimizer_reset: If `True` and `restore_hint != 0`, reset the optimizer while restoring the model.
>    * scheduler_reset: If `True` and `restore_hint != 0`, reset the scheduler while restoring the model.
>    * sync_BN: If `True`, apply synchronized batch normalization.
>    * total_iter: The total number of training iterations, an `int` value.
>    * sampler:
>        - type: The name of the sampler. Choose `TripletSampler`.
>        - sample_type: The first part (`all`, `fixed` or `unfixed`) controls how many frames are sampled from each sequence, while the second part (`ordered` or `unordered`) controls whether the frames keep their natural order. Example: `fixed_unordered` means selecting a fixed number of frames at random.
>        - batch_size: *[P, K]*, where `P` denotes the number of subjects in a training batch and `K` the number of sequences sampled per subject. **Example** (see the YAML sketch after this list):
>            - 8
>            - 16
>        - **others**: Please refer to [data.sampler](../opengait/data/sampler.py) and [data.collate_fn](../opengait/data/collate_fn.py).
>    * **others**: Please refer to `evaluator_cfg`.
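
In particular, the *[P, K]* batch size is written as a two-element list. A minimal sampler sketch, consistent with the example at the end of this page:

```yaml
trainer_cfg:
  sampler:
    type: TripletSampler
    sample_type: fixed_unordered
    frames_num_fixed: 30
    batch_size:
      - 8  # P: identities per batch
      - 16 # K: sequences per identity
```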
|
|
----

**Note**:
|
|
- All the config items will be merged into [default.yaml](../configs/default.yaml), and the current config takes precedence.
- The output directory, which contains the log, checkpoint and summary files, is determined by the `dataset_name`, `model` and `save_name` settings, i.e. `output/${dataset_name}/${model}/${save_name}` (for the example below, `output/CASIA-B/Baseline/Baseline`).
|
|
# Example

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery exists for it
  test_dataset_name: CASIA-B

evaluator_cfg:
  enable_float16: true
  restore_ckpt_strict: true
  restore_hint: 60000
  save_name: Baseline
  eval_func: evaluate_indoor_dataset
  sampler:
    batch_shuffle: false
    batch_size: 16
    sample_type: all_ordered # all: use the whole sequence for testing; ordered: keep frames in their natural order. Other option: fixed_unordered
    frames_all_limit: 720 # limit the number of sampled frames to avoid running out of memory
  metric: euc # cos
  transform:
    - type: BaseSilCuttingTransform
      img_w: 64

loss_cfg:
  - loss_term_weight: 1.0
    margin: 0.2
    type: TripletLoss
    log_prefix: triplet
  - loss_term_weight: 0.1
    scale: 16
    type: CrossEntropyLoss
    log_prefix: softmax
    log_accuracy: true

model_cfg:
  model: Baseline
  backbone_cfg:
    in_channels: 1
    layers_cfg: # layer configuration for automatic model construction
      - BC-64
      - BC-64
      - M
      - BC-128
      - BC-128
      - M
      - BC-256
      - BC-256
      # - M
      # - BC-512
      # - BC-512
    type: Plain
  SeparateFCs:
    in_channels: 256
    out_channels: 256
    parts_num: 31
  SeparateBNNecks:
    class_num: 74
    in_channels: 256
    parts_num: 31
  bin_num:
    - 16
    - 8
    - 4
    - 2
    - 1

optimizer_cfg:
  lr: 0.1
  momentum: 0.9
  solver: SGD
  weight_decay: 0.0005

scheduler_cfg:
  gamma: 0.1
  milestones: # learning rate is multiplied by gamma at each milestone
    - 20000
    - 40000
  scheduler: MultiStepLR

trainer_cfg:
  enable_float16: true # half-precision floats for memory reduction and speedup
  fix_layers: false
  log_iter: 100
  restore_ckpt_strict: true
  restore_hint: 0
  save_iter: 10000
  save_name: Baseline
  sync_BN: true
  total_iter: 60000
  sampler:
    batch_shuffle: true
    batch_size:
      - 8 # TripletSampler: batch_size[0] is the number of identities (P)
      - 16 # batch_size[1] is the number of sequences per identity (K)
    frames_num_fixed: 30 # fixed number of frames for training
    frames_num_max: 50 # max number of frames for unfixed training
    frames_num_min: 25 # min number of frames for unfixed training
    sample_type: fixed_unordered # fixed: constant number of input frames; unordered: random frame order. Other options: unfixed_ordered or all_ordered
    type: TripletSampler
  transform:
    - type: BaseSilCuttingTransform
      img_w: 64
```