# GaitSet

[Anti-996 License](https://github.com/996icu/996.ICU/blob/master/LICENSE)
[996.icu](https://996.icu)

GaitSet is a **flexible**, **effective** and **fast** network for cross-view gait recognition. The [paper](https://ieeexplore.ieee.org/document/9351667) has been published on IEEE TPAMI.
#### Flexible
The input of GaitSet is a set of silhouettes.

- There are **NO constraints** on the input,
which means it can contain **any number** of **non-consecutive** silhouettes filmed under **different viewpoints**
with **different walking conditions**.

- As the input is a set, the **permutation** of the elements in the input
will **NOT change** the output at all (see the sketch below).
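
A minimal PyTorch sketch of this property (this is not the code in this repository; the toy `FrameEncoder` and the choice of max pooling are only for illustration): per-frame features are aggregated with a symmetric function, so shuffling the frames cannot change the result.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Toy per-frame encoder; any CNN that maps one silhouette to a vector would do."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):                       # x: (n_frames, 1, 64, 64)
        f = self.pool(torch.relu(self.conv(x)))
        return f.view(f.size(0), -1)            # (n_frames, 8)

encoder = FrameEncoder()
silhouettes = torch.rand(30, 1, 64, 64)         # a set of 30 pretreated 64x64 frames
shuffled = silhouettes[torch.randperm(30)]      # the same frames in a different order

# Aggregate per-frame features with a symmetric function (max over the frame
# dimension); the output is identical for both orderings.
set_feature = encoder(silhouettes).max(dim=0)[0]
set_feature_shuffled = encoder(shuffled).max(dim=0)[0]
print(torch.allclose(set_feature, set_feature_shuffled))   # True
```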
#### Effective
It achieves **Rank@1=95.0%** on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp)
and **Rank@1=87.1%** on [OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html),
excluding identical-view cases.

#### Fast
With 8 NVIDIA 1080TI GPUs, it takes only **7 minutes** to conduct an evaluation on
[OU-MVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html), which contains 133,780 sequences
with an average of 70 frames per sequence.
## What's new
The code and checkpoint for the OUMVLP dataset have been released.
See [OUMVLP](#oumvlp) for details.

## Prerequisites

- Python 3.6
- PyTorch 0.4+
- GPU
## Getting started
### Installation

- (Optional) Install [Anaconda3](https://www.anaconda.com/download/)
- Install [CUDA 9.0](https://developer.nvidia.com/cuda-90-download-archive)
- Install [cuDNN 7.0](https://developer.nvidia.com/cudnn)
- Install [PyTorch](http://pytorch.org/)

Note that our code has been tested with [PyTorch 0.4](http://pytorch.org/).
### Dataset & Preparation
Download the [CASIA-B Dataset](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp).

**!!! ATTENTION !!! ATTENTION !!! ATTENTION !!!**

Before training or testing, please make sure you have prepared the dataset
with these two steps:
- **Step 1:** Organize the directory as:
`your_dataset_path/subject_ids/walking_conditions/views`.
E.g. `CASIA-B/001/nm-01/000/`.
- **Step 2:** Cut and align the raw silhouettes with `pretreatment.py`.
(See [pretreatment](#pretreatment) for details.)
Feel free to try different ways of pretreatment, but note that
the silhouettes after pretreatment **MUST have a size of 64x64** (a quick check of this is sketched below).
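
If you want to double-check the result of these two steps, a small sanity check such as the following can help (this script is not part of the repository; it assumes the silhouettes are stored as `.png` files and that the Pillow package is installed):

```python
import os
from PIL import Image

def check_dataset(dataset_path):
    """Walk dataset_path/subject/condition/view/ and report silhouettes that are not 64x64."""
    bad = []
    for root, _, files in os.walk(dataset_path):
        for name in files:
            if name.lower().endswith('.png'):
                path = os.path.join(root, name)
                if Image.open(path).size != (64, 64):
                    bad.append(path)
    print('%d silhouettes do not have size 64x64' % len(bad))
    return bad

check_dataset('your_dataset_path')
```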
Furthermore, you can also test our code on the [OU-MVLP Dataset](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
The number of channels and the training batch size are slightly different for this dataset.
For more details, please refer to [our paper](https://arxiv.org/abs/1811.06186).
#### Pretreatment
`pretreatment.py` uses the alignment method described in
[this paper](https://ipsjcva.springeropen.com/articles/10.1186/s41074-018-0039-6).
Pretreat your dataset by
```
python pretreatment.py --input_path='root_path_of_raw_dataset' --output_path='root_path_for_output'
```
- `--input_path` **(NECESSARY)** Root path of the raw dataset.
- `--output_path` **(NECESSARY)** Root path for the output.
- `--log_file` Log file path. #Default: './pretreatment.log'
- `--log` If set as True, all logs will be saved.
Otherwise, only warnings and errors will be saved. #Default: False
- `--worker_num` How many subprocesses to use for data pretreatment. #Default: 1
### Configuration

In `config.py`, you might want to change the following settings (a minimal sketch of them follows the list):
- `dataset_path` **(NECESSARY)** root path of the dataset
(for the above example, it is "gaitdata")
- `WORK_PATH` path to save/load checkpoints
- `CUDA_VISIBLE_DEVICES` indices of GPUs
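
For orientation, the kind of edit meant here looks roughly like the sketch below; the actual structure of `config.py` may differ, so change the values of the existing fields rather than copying this snippet verbatim (the example values are placeholders):

```python
# Illustrative only: keep the structure that already exists in config.py
# and only adjust the values of these fields.
conf = {
    'WORK_PATH': './work',                 # where checkpoints are saved and loaded
    'CUDA_VISIBLE_DEVICES': '0,1,2,3',     # indices of the GPUs to use
    'dataset_path': 'your_dataset_path',   # e.g. "gaitdata" in the example above
}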
### Train
Train a model by
```bash
python train.py
```
- `--cache` If set as TRUE, all the training data will be loaded at once before the training starts.
This will accelerate the training.
**Note that** if this arg is set as FALSE, samples will NOT be kept in memory
even if they have been used in former iterations. #Default: TRUE
### Evaluation
Evaluate the trained model by
```bash
python test.py
```
- `--iter` Iteration of the checkpoint to load. #Default: 80000
- `--batch_size` Batch size of the parallel test. #Default: 1
- `--cache` If set as TRUE, all the test data will be loaded at once before the transformation starts.
This might accelerate the testing. #Default: FALSE

It will output Rank@1 of all three walking conditions.
Note that the test is **parallelizable**.
To conduct a faster evaluation, you could use `--batch_size` to change the batch size for the test.
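
For reference, "excluding identical-view cases" means each probe is only matched against gallery views different from its own. A simplified NumPy sketch of such a Rank@1 computation (this is not the repository's evaluation code; the function and variable names are only illustrative):

```python
import numpy as np

def rank1_excluding_identical_view(gallery_feat, gallery_id, gallery_view,
                                   probe_feat, probe_id, probe_view):
    """Simplified Rank@1: a probe counts as a hit when its nearest gallery sample,
    searched only among gallery views different from the probe's view,
    shares the probe's subject id."""
    hits, total = 0, 0
    for feat, pid, view in zip(probe_feat, probe_id, probe_view):
        mask = gallery_view != view                      # drop identical-view cases
        if not mask.any():
            continue
        dist = np.linalg.norm(gallery_feat[mask] - feat, axis=1)
        hits += int(gallery_id[mask][np.argmin(dist)] == pid)
        total += 1
    return hits / max(total, 1)

# Toy usage with random data: features (n, d), subject ids (n,), view labels (n,).
feats = np.random.rand(100, 256)
ids = np.random.randint(0, 10, 100)
views = np.random.randint(0, 11, 100)
print(rank1_excluding_identical_view(feats, ids, views, feats, ids, views))
```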
#### OUMVLP
Due to the huge differences between OUMVLP and CASIA-B, the network settings for OUMVLP are slightly different.
- The modified network code can be found in `./work/OUMVLP_network`. Use these files to replace the corresponding files in `./model/network`.
- The checkpoint can be found [here](https://1drv.ms/u/s!AurT2TsSKdxQuWN8drzIv_phTR5m?e=Gfbl3m).
- In `./config.py`, change `'batch_size': (8, 16)` to `'batch_size': (32, 16)`.
- Prepare your OUMVLP dataset according to the instructions in [Dataset & Preparation](#dataset--preparation).
## To Do List
- Transformation: the script for transforming a set of silhouettes into a discriminative representation.
## Authors & Contributors
GaitSet is authored by
[Hanqing Chao](https://www.linkedin.com/in/hanqing-chao-9aa42412b/),
[Yiwei He](https://www.linkedin.com/in/yiwei-he-4a6a6bbb/),
[Junping Zhang](http://www.pami.fudan.edu.cn/~jpzhang/)
and Jianfeng Feng from Fudan University.
[Junping Zhang](http://www.pami.fudan.edu.cn/~jpzhang/)
is the corresponding author.
The code is developed by
[Hanqing Chao](https://www.linkedin.com/in/hanqing-chao-9aa42412b/)
and [Yiwei He](https://www.linkedin.com/in/yiwei-he-4a6a6bbb/).
Currently, it is being maintained by
[Hanqing Chao](https://www.linkedin.com/in/hanqing-chao-9aa42412b/)
and Kun Wang.
## Citation
Please cite this paper in your publications if it helps your research:
```
@ARTICLE{chao2019gaitset,
  author={Chao, Hanqing and Wang, Kun and He, Yiwei and Zhang, Junping and Feng, Jianfeng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={GaitSet: Cross-view Gait Recognition through Utilizing Gait as a Deep Set},
  year={2021},
  pages={1-1},
  doi={10.1109/TPAMI.2021.3057879}}
```
Link to paper:
- [GaitSet: Cross-view Gait Recognition through Utilizing Gait as a Deep Set](https://ieeexplore.ieee.org/document/9351667)
## License
GaitSet is freely available for non-commercial use, and may be redistributed under these conditions.
For commercial queries, contact [Junping Zhang](http://www.pami.fudan.edu.cn/~jpzhang/).