|
<img src="./assets/logo2.png" width="320" height="110" alt="logo" />

<div align="center"><img src="./assets/nm.gif" width="100" height="100" alt="nm" /><img src="./assets/bg.gif" width="100" height="100" alt="bg" /><img src="./assets/cl.gif" width="100" height="100" alt="cl" /></div>

------------------------------------------

<!-- 📣📣📣 **[*GaitLU-1M*](https://ieeexplore.ieee.org/document/10242019) released; please check the [tutorial](datasets/GaitLU-1M/README.md).** 📣📣📣
📣📣📣 **[*SUSTech1K*](https://lidargait.github.io) released; please check the [tutorial](datasets/SUSTech1K/README.md).** 📣📣📣

🎉🎉🎉 **[*OpenGait*](https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf) has been accepted by CVPR2023 as a highlight paper!** 🎉🎉🎉 -->

OpenGait is a flexible and extensible gait analysis project provided by the [Shiqi Yu Group](https://faculty.sustech.edu.cn/yusq/) and supported in part by [WATRIX.AI](http://www.watrix.ai).
The corresponding [paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf) was accepted to CVPR2023 as a highlight paper.

## What's New

- **[Feb 2025]** Chao successfully defended his Ph.D. thesis in Oct. 2024🎉🎉🎉 You can access the full text via [*Chao's Thesis in English*](https://www.researchgate.net/publication/388768400_Gait_Representation_Learning_and_Recognition?_sg%5B0%5D=qaGVpS8gKWPyR7olHoFd4bCs40AZdJzaM96P3TSnxrpiP9zCIUTxzeEq8YhQOlE4WemB7iMF2fHvcJFAYHTlJhTIB2J6faVa5s-xcQVj.4112nauMM4MWUNSyUa9eMeF0MEeplptpFOgb5kSgIk3lMcfPK6TdPX1bW1y_bKSdbwXuBf29GloRsVwBdexhug&_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6ImhvbWUiLCJwYWdlIjoicHJvZmlsZSIsInByZXZpb3VzUGFnZSI6InByb2ZpbGUiLCJwb3NpdGlvbiI6InBhZ2VDb250ZW50In19) or [*Chao's Thesis in Chinese*](https://www.researchgate.net/publication/388768605_butaitezhengxuexiyushibiesuanfayanjiu).
- **[Dec 2024]** The multimodal [MultiGait++](https://arxiv.org/pdf/2412.11495) has been accepted to AAAI2025🎉 Congratulations to [Dongyang](https://scholar.google.com.hk/citations?user=1xA5KxAAAAAJ)! This is his FIRST paper!
- **[Jun 2024]** The first large-scale gait-based scoliosis screening benchmark [ScoNet](https://zhouzi180.github.io/Scoliosis1K) has been accepted to MICCAI2024🎉 Congratulations to [Zirui](https://zhouzi180.github.io)! This is his FIRST paper! The code is released [here](opengait/modeling/models/sconet.py); see the [project homepage](https://zhouzi180.github.io/Scoliosis1K/) for details.
- **[May 2024]** The code of the Large Vision Model-based method [BigGait](https://arxiv.org/pdf/2402.19122) is available [here](opengait/modeling/models/BigGait.py), and the [CCPG checkpoints](https://huggingface.co/opengait/OpenGait) have been released on Hugging Face.
- **[Apr 2024]** Our team's latest checkpoints for projects such as DeepGaitV2, SkeletonGait, SkeletonGait++, and SwinGait will be released on [Hugging Face](https://huggingface.co/opengait/OpenGait). Previously released checkpoints will also gradually be made available there.
- **[Mar 2024]** [Chao](https://chaofan996.github.io) gave a talk on 'Progress in Gait Recognition'. The [video](https://event.baai.ac.cn/activities/768) and [slides](https://github.com/ChaoFan996/ChaoFan996.github.io/blob/main/240315-Progress%20in%20Gait%20Recognition.pdf) are both available😊
- **[Mar 2024]** The code of [SkeletonGait++](https://arxiv.org/pdf/2311.13444.pdf) is released [here](opengait/modeling/models/skeletongait%2B%2B.py); see the [readme](configs/skeletongait) for details.
- **[Mar 2024]** [BigGait](https://arxiv.org/pdf/2402.19122.pdf) has been accepted to CVPR2024🎉 Congratulations to [Dingqiang](https://bugjudger.github.io)! This is his FIRST paper!
- [Jan 2024] The code of the transformer-based [SwinGait](https://arxiv.org/pdf/2303.03301.pdf) is available [here](opengait/modeling/models/swingait.py).

<!--- [Dec 2023] A new state-of-the-art baseline, i.e., [DeepGaitV2](https://arxiv.org/pdf/2303.03301.pdf), is available [here](opengait/modeling/models/deepgaitv2.py)! -->
<!-- - [Nov 2023] The first million-level unlabeled gait dataset, i.e., [GaitLU-1M](https://ieeexplore.ieee.org/document/10242019), is released and supported in [datasets/GaitLU-1M](datasets/GaitLU-1M/README.md).
- [Oct 2023] Several representative pose-based methods are supported in [opengait/modeling/models](./opengait/modeling/models). This feature is mainly inherited from [FastPoseGait](https://github.com/BNU-IVC/FastPoseGait). Many thanks to the contributors😊.
- [July 2023] [CCPG](https://github.com/BNU-IVC/CCPG) is supported in [datasets/CCPG](./datasets/CCPG). -->
<!-- - [July 2023] [SUSTech1K](https://lidargait.github.io) is released and supported in [datasets/SUSTech1K](./datasets/SUSTech1K).
- [May 2023] A real gait recognition system, [All-in-One-Gait](https://github.com/jdyjjj/All-in-One-Gait), provided by [Dongyang Jin](https://github.com/jdyjjj), is available.
- [Apr 2023] [CASIA-E](datasets/CASIA-E/README.md) is supported by OpenGait.
- [Feb 2023] The [HID 2023 competition](https://hid2023.iapr-tc4.org/) is open; welcome to participate. The tutorial for the competition has been updated in [datasets/HID/](./datasets/HID).
- [Dec 2022] Dataset [Gait3D](https://github.com/Gait3D/Gait3D-Benchmark) is supported in [datasets/Gait3D](./datasets/Gait3D).
- [Mar 2022] Dataset [GREW](https://www.grew-benchmark.org) is supported in [datasets/GREW](./datasets/GREW). -->

## Our Works

- [**Chao's Thesis**] Gait Representation Learning and Recognition, [*Chinese Original*](https://www.researchgate.net/publication/388768605_butaitezhengxuexiyushibiesuanfayanjiu) and [*English Translation*](https://www.academia.edu/127496287/Gait_Representation_Learning_and_Recognition).
- [**AAAI'25**] Exploring More from Multiple Gait Modalities for Human Identification, [*Paper*](https://arxiv.org/pdf/2412.11495) and [*MultiGait++ Code*](opengait/modeling/models/multigait++.py).
- [**TBIOM'24**] A Comprehensive Survey on Deep Gait Recognition: Algorithms, Datasets, and Challenges, [*Survey Paper*](https://arxiv.org/pdf/2206.13732).
- [**MICCAI'24**] Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis, [*Paper*](https://arxiv.org/pdf/2407.05726), [*Dataset*](https://zhouzi180.github.io/Scoliosis1K), and [*ScoNet Code*](opengait/modeling/models/sconet.py).
- [**CVPR'24**] BigGait: Learning Gait Representation You Want by Large Vision Models, [*Paper*](https://arxiv.org/pdf/2402.19122.pdf) and [*BigGait Code*](opengait/modeling/models/BigGait.py).
- [**AAAI'24**] SkeletonGait++: Gait Recognition Using Skeleton Maps, [*Paper*](https://arxiv.org/pdf/2311.13444.pdf) and [*SkeletonGait++ Code*](opengait/modeling/models/skeletongait%2B%2B.py).
- [**AAAI'24**] Cross-Covariate Gait Recognition: A Benchmark, [*Paper*](https://arxiv.org/pdf/2312.14404.pdf), [*CCGR Dataset*](https://github.com/ShinanZou/CCGR), and [*ParsingGait Code*](https://github.com/ShiqiYu/OpenGait/blob/master/opengait/modeling/models/deepgaitv2.py).
- [**Arxiv'23**] Exploring Deep Models for Practical Gait Recognition, [*Paper*](https://arxiv.org/pdf/2303.03301.pdf), [*DeepGaitV2 Code*](https://github.com/ShiqiYu/OpenGait/blob/master/opengait/modeling/models/deepgaitv2.py), and [*SwinGait Code*](https://github.com/ShiqiYu/OpenGait/blob/master/opengait/modeling/models/swingait.py).
- [**PAMI'23**] Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark, [*Paper*](https://ieeexplore.ieee.org/document/10242019), [*GaitLU-1M Dataset*](datasets/GaitLU-1M/README.md), and [*GaitSSB Code*](opengait/modeling/models/gaitssb.py).
- [**CVPR'23**] LidarGait: Benchmarking 3D Gait Recognition with Point Clouds, [*Paper*](https://openaccess.thecvf.com/content/CVPR2023/papers/Shen_LidarGait_Benchmarking_3D_Gait_Recognition_With_Point_Clouds_CVPR_2023_paper.pdf), [*SUSTech1K Dataset*](https://lidargait.github.io), and [*LidarGait Code*](datasets/SUSTech1K/README.md).
- [**CVPR'23**] OpenGait: Revisiting Gait Recognition Towards Better Practicality, [*Highlight Paper*](https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf) and [*GaitBase Code*](configs/gaitbase).
- [**ECCV'22**] GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality, [*Paper*](https://arxiv.org/pdf/2203.03972) and [*GaitEdge Code*](configs/gaitedge/README.md).

## A Real Gait Recognition System: All-in-One-Gait

<div align="center">
<img src="./assets/probe1-After.gif" width="455" height="256" alt="probe1-After" />
</div>

The workflow of [All-in-One-Gait](https://github.com/jdyjjj/All-in-One-Gait) covers pedestrian tracking, segmentation, and recognition.
See [here](https://github.com/jdyjjj/All-in-One-Gait) for details.
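
To make the three stages concrete, here is a deliberately minimal sketch of how such a pipeline composes. Every callable (`track`, `segment`, `embed_gait`) is a placeholder, not the All-in-One-Gait API; consult the linked repository for the real implementation.

```python
# A hypothetical composition of the three stages; all callables are
# placeholders standing in for All-in-One-Gait's actual components.
import torch

def recognize(frames, track, segment, embed_gait, gallery):
    """Track pedestrians, segment silhouettes, then match gait embeddings."""
    results = {}
    for pid, person_frames in track(frames).items():   # 1) pedestrian tracking
        sils = [segment(f) for f in person_frames]     # 2) silhouette segmentation
        probe = embed_gait(sils)                       # 3) gait embedding
        # assign the gallery identity with the closest embedding
        results[pid] = min(
            gallery, key=lambda name: torch.dist(probe, gallery[name])
        )
    return results
```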
|
|
## Highlighted Features

- **Multiple Datasets Supported**: [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp), [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html), [SUSTech1K](https://lidargait.github.io), [HID](http://hid2022.iapr-tc4.org/), [GREW](https://www.grew-benchmark.org), [Gait3D](https://github.com/Gait3D/Gait3D-Benchmark), [CCPG](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_An_In-Depth_Exploration_of_Person_Re-Identification_and_Gait_Recognition_in_CVPR_2023_paper.pdf), [CASIA-E](https://www.scidb.cn/en/detail?dataSetId=57be0e918db743279baf44a38d013a06), and [GaitLU-1M](https://ieeexplore.ieee.org/document/10242019).
- **Multiple Models Supported**: We have reproduced several SOTA methods and matched or even surpassed their reported performance.
- **DDP Support**: The officially recommended [`Distributed Data Parallel (DDP)`](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) mode is used during both training and testing (a minimal sketch of the pattern follows this list).
- **AMP Support**: The [`Auto Mixed Precision (AMP)`](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html?highlight=amp) option is available.
- **Nice Logging**: We use [`tensorboard`](https://pytorch.org/docs/stable/tensorboard.html) and `logging` to record everything in a clean, readable format.
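
For readers new to these two options, the snippet below is a minimal, generic PyTorch sketch of the DDP + AMP pattern, not OpenGait's actual trainer: the linear model, random batches, and script name are all placeholders.

```python
# ddp_amp_sketch.py -- a generic PyTorch DDP + AMP loop with a toy model
# and random data; it is NOT OpenGait's trainer.
# Launch with: torchrun --nproc_per_node=<num_gpus> ddp_amp_sketch.py
import os

import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)

model = DDP(nn.Linear(64, 10).cuda(), device_ids=[local_rank])  # stand-in network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # AMP: dynamic loss scaling
criterion = nn.CrossEntropyLoss()

for step in range(10):  # stand-in for a DistributedSampler-backed DataLoader
    x = torch.randn(8, 64, device="cuda")
    y = torch.randint(0, 10, (8,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # mixed-precision forward and loss
        loss = criterion(model(x), y)
    scaler.scale(loss).backward()  # scaled backward avoids fp16 underflow
    scaler.step(optimizer)  # unscales, then steps if no inf/nan appeared
    scaler.update()

dist.destroy_process_group()
```

Each `torchrun` process owns one GPU; `GradScaler` rescales the loss so fp16 gradients do not underflow.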
|
|
## Getting Started

Please see [0.get_started.md](docs/0.get_started.md). We also provide the following tutorials for your reference:
- [Prepare dataset](docs/2.prepare_dataset.md)
- [Detailed configuration](docs/3.detailed_config.md)
- [Customize model](docs/4.how_to_create_your_model.md) (a rough sketch follows this list)
- [Advanced usages](docs/5.advanced_usages.md)
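
To give a flavor of the customize-model tutorial, here is a rough, hypothetical outline of how a new model plugs in. The class name, layers, and config keys are invented; the `BaseModel` hooks and the returned dictionary layout should be verified against [docs/4.how_to_create_your_model.md](docs/4.how_to_create_your_model.md).

```python
# Hypothetical example of a new model file under opengait/modeling/models/;
# class name, layers, and config keys are invented for illustration.
import torch
from ..base_model import BaseModel  # OpenGait's training/testing scaffold


class PlainGait(BaseModel):
    def build_network(self, model_cfg):
        # Receives the 'model_cfg' section of your YAML config.
        self.backbone = torch.nn.Conv2d(1, model_cfg["out_channels"], 3, padding=1)

    def forward(self, inputs):
        ipts, labs, _, _, seqL = inputs          # unpack per OpenGait convention
        sils = ipts[0].unsqueeze(1)              # [n, 1, s, h, w] silhouettes
        feat = self.backbone(sils.mean(2))       # toy embedding: average over time
        n = feat.size(0)
        embed = feat.view(n, -1).unsqueeze(-1)   # [n, c, parts]-shaped embeddings
        return {
            "training_feat": {"triplet": {"embeddings": embed, "labels": labs}},
            "visual_summary": {},
            "inference_feat": {"embeddings": embed},
        }
```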
|
|
## Model Zoo

✨✨✨You can find all the checkpoint files on [Hugging Face](https://huggingface.co/opengait/OpenGait/)✨✨✨!

The result list of appearance-based gait recognition is available [here](docs/1.model_zoo.md).

The result list of pose-based gait recognition is available [here](./docs/1.1.skeleton_model_zoo.md).
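
If you want to fetch a checkpoint programmatically, something like the following should work with the `huggingface_hub` package. The `filename` below is a made-up example; browse [the repository](https://huggingface.co/opengait/OpenGait/) for the actual file paths.

```python
# Fetch one checkpoint from the OpenGait Hugging Face repository.
# The 'filename' below is hypothetical; browse the repo for real file paths.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="opengait/OpenGait",
    filename="GaitBase/CASIA-B/GaitBase-60000.pt",  # hypothetical example path
)
print(f"Checkpoint downloaded to: {ckpt_path}")
```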
|
|
## Authors

- [Chao Fan (樊超)](https://chaofan996.github.io), 12131100@mail.sustech.edu.cn
- [Chuanfu Shen (沈川福)](https://scholar.google.com/citations?user=jKJt7rsAAAAJ&hl=en&oi=ao), 11950016@mail.sustech.edu.cn
- [Junhao Liang (梁峻豪)](https://faculty.sustech.edu.cn/?p=95401&tagid=yusq&cat=2&iscss=1&snapid=1&orderby=date), 12132342@mail.sustech.edu.cn

OpenGait is now mainly maintained by [Dongyang Jin (金冬阳)](https://github.com/jdyjjj), 11911221@mail.sustech.edu.cn.

## Acknowledgement

- GLN: [Saihui Hou (侯赛辉)](http://home.ustc.edu.cn/~saihui/index_english.html)
- GaitGL: [Beibei Lin (林贝贝)](https://scholar.google.com/citations?user=KyvHam4AAAAJ&hl=en&oi=ao)
- GREW: [GREW Team](https://github.com/XiandaGuo/GREW-Benchmark)
- FastPoseGait: [FastPoseGait Team](https://github.com/BNU-IVC/FastPoseGait)
- Gait3D: [Gait3D Team](https://gait3d.github.io/)

## Citation

```
@InProceedings{Fan_2023_CVPR,
    author    = {Fan, Chao and Liang, Junhao and Shen, Chuanfu and Hou, Saihui and Huang, Yongzhen and Yu, Shiqi},
    title     = {OpenGait: Revisiting Gait Recognition Towards Better Practicality},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {9707-9716}
}
```

**Note:**
This code is for **academic purposes** only; it must not be used for anything that might be considered commercial use.