Diff of /docs/5.advanced_usages.md [000000] .. [fd9ef4]

--- a
+++ b/docs/5.advanced_usages.md
@@ -0,0 +1,88 @@
+# Advanced Usages
+### Cross-Dataset Evaluation
+> You can conduct cross-dataset evaluation simply by modifying several arguments in your [data_cfg](../configs/baseline/baseline.yaml#L1).
+>
+>  Take [baseline.yaml](../configs/baseline/baseline.yaml) as an example:
+> ```yaml
+> data_cfg:
+>   dataset_name: CASIA-B
+>   dataset_root: your_path
+>   dataset_partition: ./datasets/CASIA-B/CASIA-B.json
+>   num_workers: 1
+>   remove_no_gallery: false # Remove probe if no gallery for it
+>   test_dataset_name: CASIA-B
+> ```
+> Now, suppose we have a model trained on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) and want to test it on [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
+> 
+> We only need to alter `dataset_root`, `dataset_partition`, and `test_dataset_name`, like so:
+> ```yaml
+> data_cfg:
+>   dataset_name: CASIA-B
+>   dataset_root: your_OUMVLP_path
+>   dataset_partition: ./datasets/OUMVLP/OUMVLP.json
+>   num_workers: 1
+>   remove_no_gallery: false # Remove probe if no gallery for it
+>   test_dataset_name: OUMVLP
+> ```
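+>
+> Before launching the test, it can save time to sanity-check that the partition file actually matches the new `dataset_root`. Below is a minimal sketch; the `check_partition` helper is hypothetical, and it assumes the partition JSON stores a `TEST_SET` list of identity folders, as in the partition files shipped under `./datasets`:
+> ```python
+> import json
+> import os
+>
+> def check_partition(dataset_root, partition_path):
+>     # hypothetical helper: verify that every test identity in the
+>     # partition file has a matching folder under the new dataset_root
+>     with open(partition_path) as f:
+>         partition = json.load(f)
+>     missing = [pid for pid in partition["TEST_SET"]
+>                if not os.path.isdir(os.path.join(dataset_root, pid))]
+>     if missing:
+>         print("%d test ids missing under %s, e.g. %s"
+>               % (len(missing), dataset_root, missing[:5]))
+>
+> check_partition("your_OUMVLP_path", "./datasets/OUMVLP/OUMVLP.json")
+> ```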
+---
+
+### Data Augmentation
+> In OpenGait, there is a basic transform class called by almost all models, namely [BaseSilCuttingTransform](../opengait/data/transform.py#L20), which is used to cut the input silhouettes.
+>
+> Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:
+> * *Step1*: Define the transform function or class in [transform.py](../opengait/data/transform.py), and make sure it is callable. The style of [torchvision.transforms](https://pytorch.org/vision/stable/_modules/torchvision/transforms/transforms.html) is recommended; the following shows a demo (a fully worked example follows after *Step2*):
+>> ```python
+>> import torchvision.transforms as T
+>> class demo1():
+>>     def __init__(self, args):
+>>         pass
+>>
+>>     def __call__(self, seqs):
+>>         '''
+>>             seqs: with dimension of [sequence, height, width]
+>>         '''
+>>         # ... implement the first augmentation on seqs here ...
+>>         return seqs
+>>
+>> class demo2():
+>>     def __init__(self, args):
+>>         pass
+>>
+>>     def __call__(self, seqs):
+>>         # ... implement the second augmentation on seqs here ...
+>>         return seqs
+>>
+>> def TransformDemo(base_args, demo1_args, demo2_args):
+>>     # compose the basic cutting transform with the custom ones
+>>     transform = T.Compose([
+>>         BaseSilCuttingTransform(**base_args),  # defined earlier in transform.py
+>>         demo1(args=demo1_args),
+>>         demo2(args=demo2_args)
+>>     ])
+>>     return transform
+>> ```
+> * *Step2*: Reset the [`transform`](../configs/baseline/baseline.yaml#L100) arguments in your config file:
+>> ```yaml
+>> transform:
+>>   - type: TransformDemo
+>>     base_args: {'img_w': 64}
+>>     demo1_args: false
+>>     demo2_args: false
+>> ```
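+>
+> Putting the two steps together, here is a concrete sketch of a custom augmentation class. The name `RandomPatchErasing` and its arguments are hypothetical, not an OpenGait built-in, and it assumes `seqs` arrives as a numpy array of shape `[sequence, height, width]`:
+>> ```python
+>> import numpy as np
+>>
+>> class RandomPatchErasing():
+>>     # hypothetical augmentation: zero out a random square patch
+>>     def __init__(self, prob=0.5, patch=16):
+>>         self.prob = prob    # probability of applying the erasing
+>>         self.patch = patch  # side length of the erased square
+>>
+>>     def __call__(self, seqs):
+>>         # erase the same square region in every frame of the sequence
+>>         if np.random.rand() < self.prob:
+>>             _, h, w = seqs.shape
+>>             y = np.random.randint(0, h - self.patch)
+>>             x = np.random.randint(0, w - self.patch)
+>>             seqs = seqs.copy()
+>>             seqs[:, y:y + self.patch, x:x + self.patch] = 0
+>>         return seqs
+>> ```
+> Assuming the remaining config keys are forwarded as keyword arguments (as `base_args` and friends are for `TransformDemo` above), the matching config entry might look like:
+>> ```yaml
+>> transform:
+>>   - type: RandomPatchErasing
+>>     prob: 0.5
+>>     patch: 16
+>> ```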
+
+### Visualization
+> To understand how the model works, you sometimes need to visualize the intermediate results.
+> 
+> For this purpose, we have defined a built-in instantiation of [`torch.utils.tensorboard.SummaryWriter`](https://pytorch.org/docs/stable/tensorboard.html), namely [`self.msg_mgr.writer`](../opengait/utils/msg_manager.py#L24), so that you can log intermediate information wherever you want.
+> 
+> Demo: if we want to visualize the output feature of [baseline's backbone](../opengait/modeling/models/baseline.py#L27), we can just insert the following code at [baseline.py#L28](../opengait/modeling/models/baseline.py#L28):
+>> ```python
+>> summary_writer = self.msg_mgr.writer
+>> # log only from the main process, during training, once every 100 iterations
+>> if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
+>>     # temporal mean of the feature maps, logged with one channel per video frame
+>>     summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
+>> ```
+> Note that this example requires the [`moviepy`](https://github.com/Zulko/moviepy) package, and hence you should run `pip install moviepy` first.
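+>
+> If you only want a quick qualitative look without the video dependency, `SummaryWriter.add_image` works as well. A minimal sketch, assuming (as in the demo above) that `outs` has shape `[n, c, s, h, w]`:
+>> ```python
+>> # first frame of the first sequence, averaged over channels -> [1, h, w]
+>> img = outs[0, :, 0].mean(0, keepdim=True)
+>> # normalize to [0, 1] so the feature map renders sensibly in TensorBoard
+>> img = (img - img.min()) / (img.max() - img.min() + 1e-6)
+>> summary_writer.add_image('outs/first_frame', img, self.iteration)
+>> ```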
\ No newline at end of file