Apart from training/testing scripts, we provide lots of useful tools under the `tools/` directory.

## Useful Tools Link

<!-- TOC -->

- [Useful Tools Link](#useful-tools-link)
- [Log Analysis](#log-analysis)
- [Model Complexity](#model-complexity)
- [Model Conversion](#model-conversion)
  - [MMAction2 model to ONNX (experimental)](#mmaction2-model-to-onnx-experimental)
  - [Prepare a model for publishing](#prepare-a-model-for-publishing)
- [Model Serving](#model-serving)
  - [1. Convert model from MMAction2 to TorchServe](#1-convert-model-from-mmaction2-to-torchserve)
  - [2. Build `mmaction-serve` docker image](#2-build-mmaction-serve-docker-image)
  - [3. Launch `mmaction-serve`](#3-launch-mmaction-serve)
  - [4. Test deployment](#4-test-deployment)
- [Miscellaneous](#miscellaneous)
  - [Evaluating a metric](#evaluating-a-metric)
  - [Print the entire config](#print-the-entire-config)
  - [Check videos](#check-videos)

<!-- TOC -->

## Log Analysis

`tools/analysis/analyze_logs.py` plots loss/top-k accuracy curves given a training log file. Run `pip install seaborn` first to install the dependency.

![acc_curve_image](https://github.com/open-mmlab/mmaction2/raw/master/resources/acc_curve.png)

```shell
python tools/analysis/analyze_logs.py plot_curve ${JSON_LOGS} [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```

Examples:

- Plot the classification loss of some run.

    ```shell
    python tools/analysis/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
    ```

- Plot the top-1 acc and top-5 acc of some run, and save the figure to a pdf.

    ```shell
    python tools/analysis/analyze_logs.py plot_curve log.json --keys top1_acc top5_acc --out results.pdf
    ```

- Compare the top-1 acc of two runs in the same figure.

    ```shell
    python tools/analysis/analyze_logs.py plot_curve log1.json log2.json --keys top1_acc --legend run1 run2
    ```

    You can also compute the average training speed.

    ```shell
    python tools/analysis/analyze_logs.py cal_train_time ${JSON_LOGS} [--include-outliers]
    ```

- Compute the average training speed for a config file.

    ```shell
    python tools/analysis/analyze_logs.py cal_train_time work_dirs/some_exp/20200422_153324.log.json
    ```

    The output is expected to be like the following.

    ```text
    -----Analyze train time of work_dirs/some_exp/20200422_153324.log.json-----
    slowest epoch 60, average time is 0.9736
    fastest epoch 18, average time is 0.9001
    time std over epochs is 0.0177
    average iter time: 0.9330 s/iter
    ```

## Model Complexity

`tools/analysis/get_flops.py` is a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.

```shell
python tools/analysis/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```

You will get a result like this:

```text
==============================
Input shape: (1, 3, 32, 340, 256)
Flops: 37.1 GMac
Params: 28.04 M
==============================
```

:::{note}
This tool is still experimental and we do not guarantee that the number is absolutely correct.
You may use the result for simple comparisons, but double check it before you adopt it in technical reports or papers.

(1) FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 340, 256) for 2D recognizers and (1, 3, 32, 340, 256) for 3D recognizers.
(2) Some operators, such as GN and custom operators, are not counted in FLOPs. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/flops_counter.py) for details.
:::
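
If you want to compute complexity from your own code rather than through the script, the snippet below is a minimal sketch of the same computation built directly on `mmcv.cnn.get_model_complexity_info()`. It assumes the mmaction2 0.x APIs (`build_model` and the recognizers' `forward_dummy`) and uses a TSN config path purely as an example; adapt these to your setup.

```python
# Minimal sketch of the FLOPs/params computation, assuming mmaction2 0.x APIs.
from mmcv import Config
from mmcv.cnn import get_model_complexity_info
from mmaction.models import build_model

cfg = Config.fromfile('configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py')
model = build_model(cfg.model)

# Assumption: the model defines `forward_dummy`, as mmaction2 recognizers do,
# so the complexity counter can call it with a single dummy tensor.
model.forward = model.forward_dummy
model.eval()

# (1, 3, 340, 256) for a 2D recognizer, (1, 3, 32, 340, 256) for a 3D recognizer.
input_shape = (1, 3, 340, 256)
flops, params = get_model_complexity_info(
    model, input_shape, as_strings=True, print_per_layer_stat=False)
print(f'Flops: {flops}\nParams: {params}')
```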

## Model Conversion

### MMAction2 model to ONNX (experimental)

`tools/deployment/pytorch2onnx.py` converts a model to [ONNX](https://github.com/onnx/onnx) format.
It also supports comparing the outputs of the PyTorch and ONNX models for verification; a sketch of running the exported model with `onnxruntime` follows the commands below.
Run `pip install onnx onnxruntime` first to install the dependencies.
Please note that a softmax layer can be appended to recognizers with the `--softmax` option, in order to get predictions in the range `[0, 1]`.

- For recognizers, please run:

    ```shell
    python tools/deployment/pytorch2onnx.py $CONFIG_PATH $CHECKPOINT_PATH --shape $SHAPE --verify
    ```

- For localizers, please run:

    ```shell
    python tools/deployment/pytorch2onnx.py $CONFIG_PATH $CHECKPOINT_PATH --is-localizer --shape $SHAPE --verify
    ```
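
Once exported, you can also run the ONNX model yourself. The snippet below is a minimal sketch using `onnxruntime`; the file name `model.onnx` and the dummy input shape are placeholders, and the shape must match the `--shape` you passed during export.

```python
# Minimal sketch: run an exported recognizer with onnxruntime and inspect the output.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name

# Dummy input; replace the shape with the --shape used during export.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = sess.run(None, {input_name: dummy})[0]
print(scores.shape, scores.argmax(axis=-1))
```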

### Prepare a model for publishing

`tools/deployment/publish_model.py` helps users prepare their models for publishing.

Before you upload a model to AWS, you may want to:

(1) convert the model weights to CPU tensors.
(2) delete the optimizer states.
(3) compute the hash of the checkpoint file and append the hash id to the filename.
```shell
python tools/deployment/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/deployment/publish_model.py work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb/latest.pth tsn_r50_1x1x3_100e_kinetics400_rgb.pth
```

The final output filename will be `tsn_r50_1x1x3_100e_kinetics400_rgb-{hash id}.pth`.
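
For reference, the three steps above boil down to roughly the following. This is a simplified sketch rather than the script's exact implementation; in particular, the renaming at the end only mirrors the `-{hash id}` convention described above.

```python
# Rough sketch of the publishing steps: CPU weights, no optimizer, hashed filename.
import hashlib
import os

import torch

in_file = 'work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb/latest.pth'
out_file = 'tsn_r50_1x1x3_100e_kinetics400_rgb.pth'

ckpt = torch.load(in_file, map_location='cpu')  # (1) load weights as CPU tensors
ckpt.pop('optimizer', None)                     # (2) drop optimizer states
torch.save(ckpt, out_file)

with open(out_file, 'rb') as f:                 # (3) hash the file, append the hash id
    hash_id = hashlib.sha256(f.read()).hexdigest()[:8]
final_file = out_file.replace('.pth', f'-{hash_id}.pth')
os.rename(out_file, final_file)
print(final_file)  # e.g. tsn_r50_1x1x3_100e_kinetics400_rgb-<hash id>.pth
```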

## Model Serving

In order to serve an `MMAction2` model with [`TorchServe`](https://pytorch.org/serve/), you can follow the steps below:

### 1. Convert model from MMAction2 to TorchServe

```shell
python tools/deployment/mmaction2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME} \
--label-file ${LABEL_FILE}
```

### 2. Build `mmaction-serve` docker image

```shell
DOCKER_BUILDKIT=1 docker build -t mmaction-serve:latest docker/serve/
```

### 3. Launch `mmaction-serve`

Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).

Example:

```shell
docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmaction-serve:latest
```

**Note**: ${MODEL_STORE} needs to be an absolute path.
[Read the docs](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md) about the Inference (8080), Management (8081) and Metrics (8082) APIs.

### 4. Test deployment

```shell
# Assume you are under the directory `mmaction2`
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T demo/demo.mp4
```

You should obtain a response similar to:

```json
{
  "arm wrestling": 1.0,
  "rock scissors paper": 4.962051880497143e-10,
  "shaking hands": 3.9761663406245873e-10,
  "massaging feet": 1.1924419784925533e-10,
  "stretching leg": 1.0601879096849842e-10
}
```
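
If you prefer to test the endpoint from Python instead of curl, a minimal sketch with the `requests` package could look like the following. It assumes the server accepts the raw video bytes as the request body, as the curl upload above does, and `my_model` stands in for `${MODEL_NAME}`.

```python
# Minimal sketch: query the TorchServe inference API from Python.
import requests

model_name = 'my_model'  # placeholder for ${MODEL_NAME}
url = f'http://127.0.0.1:8080/predictions/{model_name}'

with open('demo/demo.mp4', 'rb') as f:
    response = requests.post(url, data=f)
print(response.json())  # class-name -> score mapping, as in the example above
```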

## Miscellaneous

### Evaluating a metric

`tools/analysis/eval_metric.py` evaluates certain metrics of the results saved in a file according to a config file.

The result file is produced by `tools/test.py` with the argument `--out ${RESULT_FILE}`, and it stores the final outputs of the whole model.

```shell
python tools/analysis/eval_metric.py ${CONFIG_FILE} ${RESULT_FILE} [--eval ${EVAL_METRICS}] [--cfg-options ${CFG_OPTIONS}] [--eval-options ${EVAL_OPTIONS}]
```
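
As a rough illustration of the workflow, the result file can also be evaluated directly against the dataset API from your own code. The snippet below is a sketch assuming the mmaction2 0.x APIs, a pickle result file named `results.pkl`, the top-k accuracy metric, and an example TSN config; all of these are assumptions to adapt.

```python
# Sketch: evaluate a saved result file against the test dataset of a config.
import mmcv
from mmcv import Config
from mmaction.datasets import build_dataset

cfg = Config.fromfile('configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py')
dataset = build_dataset(cfg.data.test, dict(test_mode=True))

results = mmcv.load('results.pkl')  # produced by `tools/test.py ... --out results.pkl`
print(dataset.evaluate(results, metrics=['top_k_accuracy']))
```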

### Print the entire config

`tools/analysis/print_config.py` prints the whole config verbatim, expanding all its imports.

```shell
python tools/analysis/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```

### Check videos

`tools/analysis/check_videos.py` uses the specified video decoder to iterate over all samples listed in the input configuration file, looks for invalid videos (corrupted or missing), and saves the corresponding file paths to an output file. Please note that after deleting invalid videos, users need to regenerate the video file list.

```shell
python tools/analysis/check_videos.py ${CONFIG} [-h] [--options OPTIONS [OPTIONS ...]] [--cfg-options CFG_OPTIONS [CFG_OPTIONS ...]] [--output-file OUTPUT_FILE] [--split SPLIT] [--decoder DECODER] [--num-processes NUM_PROCESSES] [--remove-corrupted-videos]
```
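
The core check is conceptually simple. The sketch below shows one way to test whether a single video can be decoded, using OpenCV as the decoder; the real script additionally supports several decoders, multiprocessing, and writing the invalid paths to `--output-file`.

```python
# Sketch: decide whether a single video file is readable, using OpenCV.
import os

import cv2


def is_valid_video(path: str) -> bool:
    """Return True if the file exists and at least one frame can be decoded."""
    if not os.path.exists(path):
        return False
    cap = cv2.VideoCapture(path)
    ok, _ = cap.read()
    cap.release()
    return ok


print(is_valid_video('demo/demo.mp4'))
```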