We list some common issues faced by many users and their corresponding solutions here.
Feel free to enrich the list if you find any frequent issues and have ways to help others solve them.
If the contents here do not cover your issue, please create an issue using the provided templates and make sure you fill in all the required information in the template.
**Unable to install xtcocotools**

Try to install it from PyPI manually: `pip install xtcocotools`. If that does not work, install it from source:

```shell
git clone https://github.com/jin-s13/xtcocoapi
cd xtcocoapi
python setup.py install
```
**No matching distribution found for xtcocotools>=1.6**

Install cython with `pip install cython`, then install xtcocotools from source:

```shell
git clone https://github.com/jin-s13/xtcocoapi
cd xtcocoapi
python setup.py install
```
**"No module named 'mmcv.ops'"; "No module named 'mmcv._ext'"**

Uninstall the existing mmcv in the environment with `pip uninstall mmcv`, then reinstall mmcv following the official installation instructions.
**How can I convert my own dataset to COCO style?**

You may refer to this conversion tool to prepare your data. Here is an example of the COCO-style json.

In the COCO-style json, we need "categories", "annotations" and "images":

- "categories" contains basic information about the dataset, e.g. the class name and keypoint names.
- "images" contains image-level information. We need "id", "file_name", "height" and "width". Other fields are optional. Note: it is okay if the "id"s are not continuous or not sorted (e.g. 1000, 40, 352, 333, ...).
- "annotations" contains instance-level information. We need "image_id", "id", "keypoints", "num_keypoints", "bbox", "iscrowd", "area" and "category_id". Other fields are optional. Notes: (1) "num_keypoints" means the number of visible keypoints. (2) By default, please set "iscrowd: 0". (3) "area" can be calculated from the bbox (area = w * h). (4) Simply set "category_id: 1". (5) The "image_id" in "annotations" should match an "id" in "images".
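Putting the required fields together, a minimal COCO-style file could look like the following sketch. All ids, names and coordinates are made up for illustration; a real file needs the full keypoint list and one entry per image/instance.

```python
import json

coco = {
    "categories": [{
        "id": 1,
        "name": "person",
        "keypoints": ["nose", "left_eye", "right_eye"],  # truncated for brevity
        "skeleton": [[1, 2], [1, 3]],
    }],
    "images": [{
        "id": 1000,                # ids need not be continuous or sorted
        "file_name": "000001.jpg",
        "height": 480,
        "width": 640,
    }],
    "annotations": [{
        "id": 1,
        "image_id": 1000,          # must match an "id" in "images"
        "category_id": 1,          # simply 1 for single-category datasets
        "keypoints": [320, 240, 2, 300, 230, 2, 340, 230, 1],  # (x, y, v) triplets
        "num_keypoints": 3,        # number of visible keypoints
        "bbox": [280, 200, 80, 60],  # (x, y, w, h)
        "area": 80 * 60,           # can simply be w * h of the bbox
        "iscrowd": 0,
    }],
}

json_str = json.dumps(coco, indent=2)  # ready to write to an annotation file
```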
**What if my custom dataset does not have ground-truth bbox?**

We can estimate the bounding box of a person as the minimal box that tightly bounds all the keypoints, and set the area of the person as the area of that bounding box. During evaluation, please set `use_area=False` as in this example.
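This estimation can be sketched with a hypothetical helper (not part of mmpose) that takes COCO-style (x, y, v) keypoint triplets, where v == 0 marks an unlabeled keypoint:

```python
def keypoints_to_bbox(keypoints):
    """Estimate an (x, y, w, h) box tightly bounding the labeled keypoints.

    `keypoints` is a list of COCO-style (x, y, v) triplets; keypoints with
    v == 0 are not labeled and are ignored. Hypothetical helper for
    illustration.
    """
    xs = [x for x, _, v in keypoints if v > 0]
    ys = [y for _, y, v in keypoints if v > 0]
    x_min, y_min = min(xs), min(ys)
    w, h = max(xs) - x_min, max(ys) - y_min
    return x_min, y_min, w, h


def bbox_area(bbox):
    """The "area" of the instance: simply w * h of its bounding box."""
    _, _, w, h = bbox
    return w * h
```

For example, the keypoints `[(10, 20, 2), (40, 60, 1), (5, 50, 0)]` yield the box `(10, 20, 30, 40)` with area 1200, since the third keypoint is unlabeled.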
**What is "COCO_val2017_detections_AP_H_56_person.json"? Can I train pose models without it?**

"COCO_val2017_detections_AP_H_56_person.json" contains the "detected" human bounding boxes for the COCO validation set, which are generated by FasterRCNN. One can choose to evaluate models with the ground-truth bounding boxes by setting `use_gt_bbox=True` and `bbox_file=''`, or evaluate the generalizability of models with the detected boxes by setting `use_gt_bbox=False` and `bbox_file='COCO_val2017_detections_AP_H_56_person.json'`.
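In config terms, the two evaluation modes correspond to something like the following sketch (the exact `data_cfg` layout may differ across mmpose versions, and the bbox file path is a placeholder):

```python
# Evaluate with ground-truth boxes:
data_cfg_gt = dict(
    use_gt_bbox=True,
    bbox_file='',  # no detection file needed
)

# Evaluate with detected boxes to test generalizability:
data_cfg_det = dict(
    use_gt_bbox=False,
    bbox_file='COCO_val2017_detections_AP_H_56_person.json',  # placeholder path
)
```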
**RuntimeError: Address already in use**

Set the environment variable `MASTER_PORT=XXX`. For example:

```shell
MASTER_PORT=29517 GPUS=16 GPUS_PER_NODE=8 CPUS_PER_TASK=2 ./tools/slurm_train.sh Test res50 configs/body/2D_Kpt_SV_RGB_Img/topdown_hm/coco/res50_coco_256x192.py work_dirs/res50_coco_256x192
```
**"Unexpected keys in source state dict" when loading pre-trained weights**

It is normal that some layers in the pre-trained model are not used in the pose model. The ImageNet-pretrained classification network and the pose network may have different architectures (e.g. no classification head), so some unexpected keys in the source state dict are actually expected.

As described in Use Pre-Trained Model: to use a pre-trained model for the whole network (backbone + head), the new config adds the link to the pre-trained model in `load_from`; to use a pre-trained backbone only, change the `pretrained` value in the backbone dict of the config file to the checkpoint path / URL. During training, the unexpected keys will be ignored.
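As a sketch, the two options might look like this in a config file (the URL and checkpoint names are placeholders, and the exact placement of `pretrained` may vary across mmpose versions):

```python
# (a) Load a checkpoint of the whole pose network (backbone + head):
load_from = 'https://example.com/checkpoints/pose_model.pth'  # placeholder URL

# (b) Initialize only the backbone from an ImageNet-pretrained checkpoint:
model = dict(
    backbone=dict(
        type='ResNet',
        depth=50,
        pretrained='torchvision://resnet50',  # checkpoint path / url
    ),
)
```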
**How to visualize the training accuracy/loss curves in real-time?**

Use `TensorboardLoggerHook` in `log_config` like

```python
log_config = dict(interval=20, hooks=[dict(type='TensorboardLoggerHook')])
```

You can refer to tutorials/6_customize_runtime.md and the example config.
**Log info is NOT printed**

Use a smaller log interval. For example, change `interval=50` to `interval=1` in the config.
**How to fix stages of the backbone when finetuning a model?**

You can refer to `def _freeze_stages()` and `frozen_stages`. Remember to set `find_unused_parameters = True` in the config file for distributed training or testing.
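For instance, a config could combine the two settings like this (a sketch following common mmpose/mmcv conventions; `frozen_stages` support varies by backbone and version):

```python
# Freeze the stem and the first stage of a ResNet backbone while fine-tuning.
model = dict(
    backbone=dict(
        type='ResNet',
        depth=50,
        frozen_stages=1,  # -1 freezes nothing; 1 freezes stem + stage 1
    ),
)

# Required for distributed training/testing, since frozen parameters
# receive no gradients:
find_unused_parameters = True
```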
**How to evaluate on the MPII test dataset?**

Since we do not have the ground truth for the test dataset, we cannot evaluate it 'locally'. If you would like to evaluate performance on the test set, you have to upload the pred.mat (which is generated during testing) to the official server via email, according to the MPII guideline.
**For top-down 2D pose estimation, why can predicted joint coordinates fall outside the bounding box (bbox)?**

We do not directly use the bbox to crop the image. The bbox is first transformed to center & scale, and the scale is multiplied by a factor (1.25) to include some context. If the width/height ratio differs from that of the model input (possibly 192/256), the bbox is adjusted accordingly.
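The transform described above can be sketched as follows. This is a hypothetical helper mirroring the common top-down convention of expressing scale in units of 200 pixels (`pixel_std`), not the exact mmpose code:

```python
def xywh_to_center_scale(bbox, aspect_ratio=192 / 256, padding=1.25,
                         pixel_std=200.0):
    """Convert an (x, y, w, h) bbox to the (center, scale) used for cropping.

    The box is stretched to the model-input aspect ratio (w / h) and enlarged
    by `padding` to include context, which is why predicted keypoints can
    legitimately fall outside the original bbox.
    """
    x, y, w, h = bbox
    center = (x + w * 0.5, y + h * 0.5)
    # Stretch the shorter side so the box matches the input aspect ratio.
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio
    # Scale in units of pixel_std, with padding for context.
    scale = (w / pixel_std * padding, h / pixel_std * padding)
    return center, scale
```

For example, the bbox `(10, 20, 30, 80)` with a 192x256 input keeps its center at `(25.0, 60.0)` while its width is stretched from 30 to 60 to match the input aspect ratio before padding is applied.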
**How to run mmpose on CPU?**

Run demos with `--device=cpu`.
**How to speed up inference?**

For top-down models, try to edit the config file. For example:

1. set `flip_test=False` in topdown-res50.
2. set `post_process='default'` in topdown-res50.

For bottom-up models, try to edit the config file. For example:

1. set `flip_test=False` in AE-res50.
2. set `adjust=False` in AE-res50.
3. set `refine=False` in AE-res50.

**Why is the onnx model converted by mmpose throwing errors when converting to other frameworks such as TensorRT?**
For now, we can only guarantee that models in mmpose are onnx-compatible. However, some operations in onnx may be unsupported by your target framework for deployment, e.g. TensorRT in this issue. When such a situation occurs, we suggest you raise an issue and ask the community for help, as long as `pytorch2onnx.py` works well and is verified numerically.