# Advanced Usages
### Cross-Dataset Evaluation
> You can conduct cross-dataset evaluation by simply modifying several arguments in your [data_cfg](../configs/baseline/baseline.yaml#L1).
>
> Take [baseline.yaml](../configs/baseline/baseline.yaml) as an example:
> ```yaml
> data_cfg:
>   dataset_name: CASIA-B
>   dataset_root: your_path
>   dataset_partition: ./datasets/CASIA-B/CASIA-B.json
>   num_workers: 1
>   remove_no_gallery: false # Remove probe if no gallery for it
>   test_dataset_name: CASIA-B
> ```
> Now, suppose we have a model trained on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp), and we want to test it on [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
>
> We should alter the `dataset_root`, `dataset_partition` and `test_dataset_name`, like this:
> ```yaml
> data_cfg:
>   dataset_name: CASIA-B
>   dataset_root: your_OUMVLP_path
>   dataset_partition: ./datasets/OUMVLP/OUMVLP.json
>   num_workers: 1
>   remove_no_gallery: false # Remove probe if no gallery for it
>   test_dataset_name: OUMVLP
> ```
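>
> After that, launch the test phase as usual (e.g., `python -m torch.distributed.launch --nproc_per_node=... opengait/main.py --cfgs <your_config> --phase test`, assuming the standard OpenGait entry point); since only the evaluation data changes, no retraining is needed.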
---

<!-- ### Identification Function
> Sometimes, your test dataset may be neither the popular [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) nor the largest [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html). Meanwhile, you may need to customize a special identification function to fit your dataset.
>
> * If your path structure is similar to [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) (the 3-fold style: `id-type-view`), we recommend you to  -->

### Data Augmentation
> In OpenGait, there is a basic transform class called by almost all the models: [BaseSilCuttingTransform](../opengait/data/transform.py#L20), which is used to cut the input silhouettes.
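>
> For a quick picture of how it is used, here is a minimal sketch (the import path is an assumption based on running code from inside the `opengait/` package; the `img_w` argument matches the config example further below):
> ```python
> from data.transform import BaseSilCuttingTransform  # import path is an assumption
>
> # cut each silhouette and resize it to a width of 64 pixels
> transform = BaseSilCuttingTransform(img_w=64)
> cut_seqs = transform(seqs)  # seqs: numpy array of shape [sequence, height, width]
> ```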
>
> Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:
> * *Step 1*: Define the transform function or class in [transform.py](../opengait/data/transform.py), and make sure it is callable. The style of [torchvision.transforms](https://pytorch.org/vision/stable/_modules/torchvision/transforms/transforms.html) is recommended; the following shows a demo:
>> ```python
>> import torchvision.transforms as T
>>
>> class demo1():
>>     def __init__(self, args):
>>         pass
>>
>>     def __call__(self, seqs):
>>         '''
>>             seqs: with dimension of [sequence, height, width]
>>         '''
>>         # ... your augmentation logic here ...
>>         return seqs
>>
>> class demo2():
>>     def __init__(self, args):
>>         pass
>>
>>     def __call__(self, seqs):
>>         # ... your augmentation logic here ...
>>         return seqs
>>
>> def TransformDemo(base_args, demo1_args, demo2_args):
>>     transform = T.Compose([
>>         BaseSilCuttingTransform(**base_args),
>>         demo1(args=demo1_args),
>>         demo2(args=demo2_args)
>>     ])
>>     return transform
>> ```
> * *Step 2*: Reset the [`transform`](../configs/baseline/baseline.yaml#L100) arguments in your config file (a concrete augmentation sketch follows after this step):
>> ```yaml
>> transform:
>>   - type: TransformDemo
>>     base_args: {'img_w': 64}
>>     demo1_args: false
>>     demo2_args: false
>> ```
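>
> As promised, here is a minimal sketch of what a real augmentation might look like: a random horizontal flip applied to the whole sequence at once. The class name and `prob` argument are illustrative, not part of OpenGait; silhouettes are assumed to be a numpy array of shape `[sequence, height, width]`:
> ```python
> import numpy as np
>
> class RandomHorizontalFlip():  # hypothetical example, not an OpenGait built-in
>     def __init__(self, prob=0.5):
>         self.prob = prob
>
>     def __call__(self, seqs):
>         '''
>             seqs: with dimension of [sequence, height, width]
>         '''
>         # flip the whole sequence together so all frames stay consistent
>         if np.random.rand() < self.prob:
>             seqs = seqs[..., ::-1].copy()  # flip along the width axis
>         return seqs
> ```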

### Visualization
> To understand how the model works, you sometimes need to visualize the intermediate results.
>
> For this purpose, we have defined a built-in instantiation of [`torch.utils.tensorboard.SummaryWriter`](https://pytorch.org/docs/stable/tensorboard.html), namely [`self.msg_mgr.writer`](../opengait/utils/msg_manager.py#L24), to make sure you can log intermediate information anywhere you want.
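>
> For example, a quick sketch of logging a scalar statistic from inside a model (`feat` here stands for whatever intermediate tensor you care about; the tag name is arbitrary):
> ```python
> # log only from the main process, and only during training
> if torch.distributed.get_rank() == 0 and self.training:
>     self.msg_mgr.writer.add_scalar('debug/feat_mean', feat.mean(), self.iteration)
> ```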
>
> Demo: if we want to visualize the output feature of [baseline's backbone](../opengait/modeling/models/baseline.py#L27), we can insert the following code at [baseline.py#L28](../opengait/modeling/models/baseline.py#L28):
>> ```python
>> summary_writer = self.msg_mgr.writer
>> # log every 100 iterations, from the main process only, during training
>> if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
>>     summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
>> ```
> Note that this example requires the [`moviepy`](https://github.com/Zulko/moviepy) package, so you should run `pip install moviepy` first.
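>
> If you would rather avoid the `moviepy` dependency, a single frame can be logged as a plain image instead. A minimal sketch, assuming `outs` is laid out as `[n, c, s, h, w]` (consistent with the `mean(2)` over the temporal axis above):
> ```python
> # log the channel-mean of the first sequence's first frame as a grayscale image
> if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
>     frame = outs[0, :, 0].mean(0, keepdim=True)  # [1, h, w], CHW layout for add_image
>     summary_writer.add_image('outs_frame0', frame, self.iteration)
> ```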