The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis of how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
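To make the inflation idea concrete, here is a minimal sketch (not the authors' code; the function name and shapes are illustrative) of how a pretrained 2D convolution filter can be expanded into 3D. Repeating the filter along a new temporal axis and dividing by the temporal depth keeps the response on a temporally constant ("boring") video identical to the original 2D network's response, which is the bootstrapping trick the abstract refers to.

```python
import torch

def inflate_conv_weight(w2d: torch.Tensor, time_dim: int) -> torch.Tensor:
    """Inflate a 2D conv weight of shape (out_c, in_c, kH, kW) into 3D.

    The repeat-and-rescale scheme preserves activations on static videos,
    so ImageNet-pretrained parameters remain a sensible initialization.
    """
    w3d = w2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)  # (out_c, in_c, T, kH, kW)
    return w3d / time_dim

# Example: inflate a 7x7 ImageNet filter into a 7x7x7 spatio-temporal one.
w2d = torch.randn(64, 3, 7, 7)
w3d = inflate_conv_weight(w2d, time_dim=7)
assert w3d.shape == (64, 3, 7, 7, 7)
```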
```bibtex
@inproceedings{carreira2017quo,
  author    = {Carreira, Joao and Zisserman, Andrew},
  title     = {Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset},
  booktitle = {CVPR},
  year      = {2017},
  pages     = {4724--4733},
  doi       = {10.1109/CVPR.2017.502}
}
```
```bibtex
@inproceedings{NonLocal2018,
  author    = {Wang, Xiaolong and Girshick, Ross and Gupta, Abhinav and He, Kaiming},
  title     = {Non-local Neural Networks},
  booktitle = {CVPR},
  year      = {2018}
}
```
| config | resolution | gpus | backbone | pretrain | top1 acc (%) | top5 acc (%) | inference time (video/s) | gpu mem (MB) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| i3d_r50_32x2x1_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 72.68 | 90.78 | 1.7 (320x3 frames) | 5170 | ckpt | log | json |
| i3d_r50_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.27 | 90.92 | x | 5170 | ckpt | log | json |
| i3d_r50_video_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 72.85 | 90.75 | x | 5170 | ckpt | log | json |
| i3d_r50_dense_32x2x1_100e_kinetics400_rgb | 340x256 | 8x2 | ResNet50 | ImageNet | 72.77 | 90.57 | 1.7 (320x3 frames) | 5170 | ckpt | log | json |
| i3d_r50_dense_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.48 | 91.00 | x | 5170 | ckpt | log | json |
| i3d_r50_lazy_32x2x1_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 72.32 | 90.72 | 1.8 (320x3 frames) | 5170 | ckpt | log | json |
| i3d_r50_lazy_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.24 | 90.99 | x | 5170 | ckpt | log | json |
| i3d_nl_embedded_gaussian_r50_32x2x1_100e_kinetics400_rgb | short-side 256 | 8x4 | ResNet50 | ImageNet | 74.71 | 91.81 | x | 6438 | ckpt | log | json |
| i3d_nl_gaussian_r50_32x2x1_100e_kinetics400_rgb | short-side 256 | 8x4 | ResNet50 | ImageNet | 73.37 | 91.26 | x | 4944 | ckpt | log | json |
| i3d_nl_dot_product_r50_32x2x1_100e_kinetics400_rgb | short-side 256 | 8x4 | ResNet50 | ImageNet | 73.92 | 91.59 | x | 4832 | ckpt | log | json |
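The three `i3d_nl_*` rows correspond to the attention variants from the Non-local paper cited above. As an illustrative sketch only (this is not the MMAction2 module, which differs in details such as normalization and zero-initializing the output projection so the block starts as an identity), an embedded-Gaussian non-local block over a 5D video tensor looks roughly like this:

```python
import torch
import torch.nn as nn

class NonLocalEmbeddedGaussian(nn.Module):
    """Sketch of an embedded-Gaussian non-local block (Wang et al., 2018)."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.inter_channels = in_channels // 2
        self.theta = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)
        self.phi = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)
        self.g = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)
        self.out = nn.Conv3d(self.inter_channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, t, h, w = x.shape
        # Project the input and flatten all space-time positions.
        theta = self.theta(x).flatten(2).transpose(1, 2)  # (N, THW, C')
        phi = self.phi(x).flatten(2)                      # (N, C', THW)
        g = self.g(x).flatten(2).transpose(1, 2)          # (N, THW, C')
        # Embedded Gaussian: softmax over pairwise dot products, so every
        # position attends to every other space-time position.
        attn = torch.softmax(theta @ phi, dim=-1)         # (N, THW, THW)
        y = (attn @ g).transpose(1, 2).reshape(n, self.inter_channels, t, h, w)
        # Residual connection, as in the paper.
        return x + self.out(y)

# Example usage on a (batch, channels, time, height, width) feature map.
block = NonLocalEmbeddedGaussian(64)
feat = torch.randn(2, 64, 8, 14, 14)
assert block(feat).shape == feat.shape
```

The gaussian and dot-product variants in the table swap this softmax-of-dot-products pairwise function for the alternatives described in the paper.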
For more details on data preparation, you can refer to the Kinetics400 section in Data Preparation.
You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
Example: train the I3D model on the Kinetics-400 dataset with deterministic behavior and periodic validation.

```shell
python tools/train.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
    --work-dir work_dirs/i3d_r50_32x2x1_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic
```
For more details, you can refer to the Training setting section in getting_started.
You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```
Example: test the I3D model on the Kinetics-400 dataset and dump the result to a JSON file.

```shell
python tools/test.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips prob
```
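Beyond the test script, you can also run a single video through a trained checkpoint from Python. The sketch below uses `init_recognizer` and `inference_recognizer` from `mmaction.apis`; exact signatures and return values vary between MMAction2 releases, so treat it as illustrative. `demo.mp4` and the checkpoint path are placeholders.

```python
from mmaction.apis import init_recognizer, inference_recognizer

# Placeholder paths: substitute your own checkpoint and video.
config_file = 'configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py'
checkpoint_file = 'checkpoints/SOME_CHECKPOINT.pth'

# Build the recognizer and load the trained weights.
model = init_recognizer(config_file, checkpoint_file, device='cuda:0')

# Run inference on one video and print the predictions.
results = inference_recognizer(model, 'demo.mp4')
print(results)
```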
For more details, you can refer to the Test a dataset section in getting_started.