
# Advanced Usages

## Cross-Dataset Evaluation

You can conduct cross-dataset evaluation by simply modifying several arguments in your `data_cfg`.

Take `baseline.yaml` as an example:

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: CASIA-B
```
Now, suppose we have a model trained on CASIA-B and want to test it on OUMVLP.

We should alter the `dataset_root`, `dataset_partition` and `test_dataset_name`, just like:

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_OUMVLP_path
  dataset_partition: ./datasets/OUMVLP/OUMVLP.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: OUMVLP
```
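With the config updated, the test phase can be launched as usual, e.g. `CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test` (the entry-point and config paths shown here are typical but may differ across OpenGait versions).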


## Data Augmentation

In OpenGait, there is a basic transform class called by almost all the models: `BaseSilCuttingTransform`, which is used to cut the input silhouettes.
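Conceptually, the cutting operation trims a fixed margin of columns from both sides of every frame. Here is a minimal sketch of the idea (not the exact OpenGait implementation; the `margin` value is illustrative):

```python
import numpy as np

def cut_silhouettes(seqs: np.ndarray, margin: int = 10) -> np.ndarray:
    # seqs: [sequence, height, width]; with margin=10, 64x64 frames become 64x44
    return seqs[:, :, margin:-margin]
```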

Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:
* *Step1*: Define the transform function or class in `transform.py`, and make sure it is callable. The style of `torchvision.transforms` is recommended, and the following shows a demo (a further concrete example appears after these steps):

```python
import torchvision.transforms as T

# BaseSilCuttingTransform is already defined in transform.py

class demo1():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        '''
            seqs: with dimension of [sequence, height, width]
        '''
        # process seqs here
        return seqs


class demo2():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        # process seqs here
        return seqs


def TransformDemo(base_args, demo1_args, demo2_args):
    # T.Compose expects a single list of callables
    transform = T.Compose([
        BaseSilCuttingTransform(**base_args),
        demo1(args=demo1_args),
        demo2(args=demo2_args),
    ])
    return transform
```
* *Step2*: Reset the [`transform`](../configs/baseline.yaml#L100) arguments in your config file:

```yaml
transform:
  - type: TransformDemo
    base_args: {'img_w': 64}
    demo1_args: false
    demo2_args: false
```
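For instance, a sequence-level random horizontal flip could be written in the same callable style (a minimal sketch; `RandomHorizontalFlip` and its `prob` argument are illustrative, not built into OpenGait):

```python
import numpy as np

class RandomHorizontalFlip():
    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, seqs):
        # seqs: [sequence, height, width]; flip every frame along the width axis
        if np.random.rand() < self.prob:
            seqs = seqs[:, :, ::-1].copy()  # copy() restores a contiguous array
        return seqs
```

Such a class could then be appended to the `T.Compose` list in `TransformDemo` above and enabled through the config file in the same way.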

## Visualization

To understand how the model works, you sometimes need to visualize intermediate results.

For this purpose, we provide a built-in instance of `torch.utils.tensorboard.SummaryWriter`, namely `self.msg_mgr.writer`, so that you can log intermediate information wherever you want.

Demo: if we want to visualize the output feature of the baseline's backbone, we could insert the following code at baseline.py#L28:

```python
summary_writer = self.msg_mgr.writer
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
```
Note that this example requires the `moviepy` package, so you should run `pip install moviepy` first.
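The summaries are written to the model's output directory during training; to browse them, launch TensorBoard with `tensorboard --logdir <your_output_directory>` and open the printed URL in a browser (the exact log path depends on your configuration).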