GaitSet is a flexible, effective and fast network for cross-view gait recognition. The paper has been published in IEEE TPAMI.
The input of GaitSet is a set of silhouettes.
There are NOT ANY constraints on the input,
which means it can contain any number of non-consecutive silhouettes filmed under different viewpoints
with different walking conditions.
As the input is a set, the permutation of the elements in the input
will NOT change the output at all.
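Internally, frame-level features are aggregated with a permutation-invariant Set Pooling operation, so shuffling the silhouettes cannot affect the result. Below is a minimal PyTorch sketch of the idea, assuming max pooling over the frame dimension (the feature shape is illustrative):

```python
import torch

# Frame-level feature maps for one silhouette set: (n_frames, C, H, W).
frames = torch.randn(30, 32, 16, 11)

# Set Pooling by taking the max over the frame dimension.
set_feature = frames.max(dim=0)[0]

# Permuting the frames leaves the pooled feature unchanged,
# which is why the order of the input silhouettes does not matter.
perm = torch.randperm(frames.size(0))
assert torch.equal(set_feature, frames[perm].max(dim=0)[0])
```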
It achieves Rank@1=95.0% on CASIA-B
and Rank@1=87.1% on OU-MVLP,
excluding identical-view cases.
With 8 NVIDIA 1080TI GPUs, it only takes 7 minutes to conduct an evaluation on
OU-MVLP, which contains 133,780 sequences
with an average of 70 frames per sequence.
The code and checkpoint for OUMVLP dataset have been released.
See OUMVLP for details.
Note that our code is tested on PyTorch 0.4.
Download CASIA-B Dataset
!!! ATTENTION !!! ATTENTION !!! ATTENTION !!!
Before training or test, please make sure you have prepared the dataset
by these two steps:
- Step 1: Organize the directory as:
  `your_dataset_path/subject_ids/walking_conditions/views`.
  E.g. `CASIA-B/001/nm-01/000/`.
- Step 2: Cut and align the raw silhouettes with `pretreatment.py`.
  (See pretreatment for details.)
You are welcome to try different ways of pretreatment, but note that
the silhouettes after pretreatment MUST have a size of 64x64.
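A quick sanity check of the prepared data can save a failed run later. The snippet below is a minimal sketch that walks the layout described in Step 1 and asserts the 64x64 size requirement; the use of Pillow, and the assumption that every file in a view folder is a silhouette image, are illustrative:

```python
import os
from PIL import Image

dataset_path = "your_dataset_path"  # root organized as subject/condition/view

for subject in sorted(os.listdir(dataset_path)):
    for condition in sorted(os.listdir(os.path.join(dataset_path, subject))):
        view_root = os.path.join(dataset_path, subject, condition)
        for view in sorted(os.listdir(view_root)):
            seq_dir = os.path.join(view_root, view)
            for frame in os.listdir(seq_dir):
                with Image.open(os.path.join(seq_dir, frame)) as img:
                    assert img.size == (64, 64), f"{seq_dir}/{frame} is not 64x64"
```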
Furthermore, you can also test our code on the OU-MVLP dataset.
The number of channels and the training batch size are slightly different for this dataset.
For more details, please refer to our paper.
pretreatment.py
uses the alignment method in
this paper.
Pretreat your dataset by
python pretreatment.py --input_path='root_path_of_raw_dataset' --output_path='root_path_for_output'
- `--input_path` (NECESSARY) Root path of the raw dataset.
- `--output_path` (NECESSARY) Root path for output.
- `--log_file` Log file path. Default: './pretreatment.log'
- `--log` If set as True, all logs will be saved.
- `--worker_num` How many subprocesses to use for data pretreatment. Default: 1
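For reference, the core of the cut-and-align step can be sketched as follows. This is a minimal illustration of the idea rather than the exact `pretreatment.py` implementation, and the use of NumPy and OpenCV is an assumption about your environment:

```python
import numpy as np
import cv2  # used only for resizing in this sketch

def cut_and_align(silhouette, size=64):
    """Crop to the body's vertical extent, scale the body height to `size`,
    then take a `size`-wide crop centered on the horizontal center of mass."""
    # Keep only the rows that contain foreground pixels.
    rows = np.where(silhouette.sum(axis=1) > 0)[0]
    img = silhouette[rows.min():rows.max() + 1, :]
    # Scale so the body height equals the target size.
    ratio = size / img.shape[0]
    img = cv2.resize(img, (max(1, int(img.shape[1] * ratio)), size),
                     interpolation=cv2.INTER_NEAREST)
    # Horizontal center of mass of the silhouette.
    x_center = int(round(np.where(img.sum(axis=0) > 0)[0].mean()))
    half = size // 2
    # Pad so the centered crop never runs out of bounds.
    img = np.pad(img, ((0, 0), (half, half)), mode='constant')
    return img[:, x_center:x_center + size]
```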
In `config.py`, you might want to change the following settings:
- `dataset_path` (NECESSARY) Root path of the dataset (for the above example, it is "gaitdata").
- `WORK_PATH` Path to save/load checkpoints.
- `CUDA_VISIBLE_DEVICES` Indices of GPUs.
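For reference, the relevant part of `config.py` looks roughly like the sketch below; only `dataset_path`, `WORK_PATH`, and `CUDA_VISIBLE_DEVICES` are taken from this README, and the surrounding dictionary layout is an assumption that may differ from your copy:

```python
conf = {
    "WORK_PATH": "./work",               # checkpoints are saved/loaded here
    "CUDA_VISIBLE_DEVICES": "0,1,2,3",   # indices of the GPUs to use
    "data": {
        "dataset_path": "your_dataset_path",  # e.g. "gaitdata"
        # ...
    },
    # ...
}
```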
Train a model by
python train.py
- `--cache` If set as True, all the training data will be loaded at once before training starts.
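A typical training command looks like the following (the `--cache=True` value syntax is an illustrative assumption; the flag itself is described above):

python train.py --cache=True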
Evaluate the trained model by

python test.py
- `--iter` Iteration of the checkpoint to load. Default: 80000
- `--batch_size` Batch size of the parallel test. Default: 1
- `--cache` If set as True, all the test data will be loaded at once before the transformation starts.

It will output Rank@1 of all three walking conditions.
Note that the test is parallelizable.
To conduct a faster evaluation, you can use `--batch_size` to increase the batch size for the test.
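For example, an illustrative evaluation command using the flags above (the value syntax is an assumption):

python test.py --iter=80000 --batch_size=16 --cache=True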
Due to the huge differences between OU-MVLP and CASIA-B, the network settings for OU-MVLP are slightly different.
- The alternative network code can be found at `./work/OUMVLP_network`. Use these files to replace the corresponding ones in `./model/network`.
- The checkpoint can be found here.
- In `./config.py`, modify `'batch_size': (8, 16)` to `'batch_size': (32, 16)`.
- Prepare your OUMVLP dataset according to the instructions in Dataset & Preparation.
GaitSet is authored by
Hanqing Chao,
Yiwei He,
Junping Zhang
and Jianfeng Feng from Fudan University.
Junping Zhang
is the corresponding author.
The code is developed by
Hanqing Chao
and Yiwei He.
Currently, it is being maintained by
Hanqing Chao
and Kun Wang.
Please cite this paper in your publications if it helps your research:
@ARTICLE{chao2019gaitset,
author={Chao, Hanqing and Wang, Kun and He, Yiwei and Zhang, Junping and Feng, Jianfeng},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
title={GaitSet: Cross-view Gait Recognition through Utilizing Gait as a Deep Set},
year={2021},
pages={1-1},
doi={10.1109/TPAMI.2021.3057879}}
Link to paper:
- GaitSet: Cross-view Gait Recognition through Utilizing Gait as a Deep Set
GaitSet is freely available for non-commercial use, and may be redistributed under these conditions.
For commercial queries, contact Junping Zhang.