You can conduct cross-dataset evaluation by simply modifying several arguments in your `data_cfg`.
Take `baseline.yaml` as an example:
```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: CASIA-B
```
Now, suppose we have a model trained on CASIA-B and want to test it on OUMVLP. We should modify `dataset_root`, `dataset_partition`, and `test_dataset_name` (note that `dataset_name` stays CASIA-B, matching the dataset the model was trained on), like this:
```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_OUMVLP_path
  dataset_partition: ./datasets/OUMVLP/OUMVLP.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: OUMVLP
```
In OpenGait, there is a basic transform class called by almost all models: `BaseSilCuttingTransform`, which is used to cut the input silhouettes.
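For intuition, here is a rough sketch of what such a cutting transform looks like. This is only an illustration based on the description above, not the exact OpenGait source; the class name `SilCuttingSketch` and its defaults are ours:
```python
class SilCuttingSketch():
    '''Illustrative sketch of a silhouette-cutting transform (not the exact OpenGait code).'''
    def __init__(self, img_w=64, divisor=255.0, cutting=None):
        # Assumed default: trim 10 columns per side for 64-pixel-wide frames.
        self.cutting = cutting if cutting is not None else img_w // 64 * 10
        self.divisor = divisor

    def __call__(self, seqs):
        # seqs: [sequence, height, width]; drop the margin columns on each side,
        # then rescale pixel values from [0, 255] to [0, 1].
        seqs = seqs[..., self.cutting:-self.cutting]
        return seqs / self.divisor
```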
Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:
* **Step1**: Define the transform function or class in `transform.py`, and make sure it is callable. The style of `torchvision.transforms` is recommended; the following shows a demo (a concrete, runnable example follows after Step2):
```python
import torchvision.transforms as T

# BaseSilCuttingTransform is defined in this same file (transform.py).

class demo1():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        '''
            seqs: with dimension of [sequence, height, width]
        '''
        # your augmentation logic here
        return seqs


class demo2():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        # your augmentation logic here
        return seqs


def TransformDemo(base_args, demo1_args, demo2_args):
    # T.Compose expects a list of callables.
    transform = T.Compose([
        BaseSilCuttingTransform(**base_args),
        demo1(args=demo1_args),
        demo2(args=demo2_args)
    ])
    return transform
```
* **Step2**: Reset the [`transform`](../configs/baseline.yaml#L100) arguments in your config file:
```yaml
transform:
  - type: TransformDemo
    base_args: {'img_w': 64}
    demo1_args: false
    demo2_args: false
```
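To make the demo concrete, here is a minimal sketch of a custom augmentation written in the same style: a sequence-level random horizontal flip. The class name `RandomHorizontalFlip` and its `prob` argument are our own illustration, not part of OpenGait:
```python
import numpy as np

class RandomHorizontalFlip():
    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, seqs):
        '''
            seqs: with dimension of [sequence, height, width]
        '''
        # Flip the width axis of every frame with probability `prob`,
        # so the whole sequence stays spatially consistent.
        if np.random.uniform(0, 1) < self.prob:
            seqs = seqs[..., ::-1]
        return seqs
```
You would register such a class in `transform.py` alongside the demo classes and reference it from your config exactly as in Step2.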
To understand how the model works, you sometimes need to visualize the intermediate results.
For this purpose, we have defined a built-in instantiation of `torch.utils.tensorboard.SummaryWriter`, namely `self.msg_mgr.writer`, so that you can log intermediate information anywhere you want. Demo: if we want to visualize the output feature of baseline's backbone, we can just insert the following code at baseline.py#L28:
```python
summary_writer = self.msg_mgr.writer
# Only log from the main process, during training, and every 100 iterations.
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
```
Note that this example requires the `moviepy` package, and hence you should run `pip install moviepy` first.
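If you only need lighter-weight diagnostics (or want to avoid the `moviepy` dependency), the same writer exposes the standard `SummaryWriter` methods. A minimal sketch, assuming the same `outs` tensor and logging condition as above:
```python
# Scalar and histogram logging need no extra dependencies.
summary_writer.add_scalar('outs/mean', outs.mean().item(), self.iteration)
summary_writer.add_histogram('outs/values', outs.detach().cpu(), self.iteration)
```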