Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles for designing effective ConvNet architectures for action recognition in videos and for learning these models given limited training samples. Our first contribution is the temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study of a series of good practices for learning ConvNets on video data with the help of the temporal segment network. Our approach obtains state-of-the-art performance on the HMDB51 (69.4%) and UCF101 (94.2%) datasets. We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of the temporal segment network and the proposed good practices.
@inproceedings{wang2016temporal,
title={Temporal segment networks: Towards good practices for deep action recognition},
author={Wang, Limin and Xiong, Yuanjun and Wang, Zhe and Qiao, Yu and Lin, Dahua and Tang, Xiaoou and Van Gool, Luc},
booktitle={European conference on computer vision},
pages={20--36},
year={2016},
organization={Springer}
}
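The core TSN idea summarized above (sample one snippet from each of K segments, score every snippet with a shared 2D ConvNet, and aggregate the scores with a segment consensus trained under video-level supervision) can be illustrated with a minimal PyTorch sketch. The class and argument names below are illustrative assumptions, not the MMAction2 implementation.

```python
# Minimal sketch of the TSN segment-consensus idea; names are illustrative.
import torch
import torch.nn as nn


class TSNSketch(nn.Module):
    def __init__(self, backbone: nn.Module, num_segments: int = 3):
        super().__init__()
        self.backbone = backbone          # any 2D per-frame classifier returning class scores
        self.num_segments = num_segments  # K segments sampled sparsely from the whole video

    def forward(self, snippets: torch.Tensor) -> torch.Tensor:
        # snippets: (batch, num_segments, C, H, W), one frame drawn from each
        # of the K equal-length segments of the video.
        b, k, c, h, w = snippets.shape
        scores = self.backbone(snippets.view(b * k, c, h, w))  # (b*k, num_classes)
        scores = scores.view(b, k, -1)
        # Segment consensus: average the per-snippet scores into a single
        # video-level prediction, which receives the video-level supervision.
        return scores.mean(dim=1)
```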
config | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x3_75e_ucf101_rgb [1] | 8 | ResNet50 | ImageNet | 83.03 | 96.78 | 8332 | ckpt | log | json |
[1] We report the performance on UCF-101 split1.
config | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|
tsn_r50_video_1x1x8_100e_diving48_rgb | 8 | ResNet50 | ImageNet | 71.27 | 95.74 | 5699 | ckpt | log | json |
tsn_r50_video_1x1x16_100e_diving48_rgb | 8 | ResNet50 | ImageNet | 76.75 | 96.95 | 5705 | ckpt | log | json |
config | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x8_50e_hmdb51_imagenet_rgb | 8 | ResNet50 | ImageNet | 48.95 | 80.19 | 21535 | ckpt | log | json |
tsn_r50_1x1x8_50e_hmdb51_kinetics400_rgb | 8 | ResNet50 | Kinetics400 | 56.08 | 84.31 | 21535 | ckpt | log | json |
tsn_r50_1x1x8_50e_hmdb51_mit_rgb | 8 | ResNet50 | Moments | 54.25 | 83.86 | 21535 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x3_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 70.60 | 89.26 | x | x | 4.3 (25x10 frames) | 8344 | ckpt | log | json |
tsn_r50_1x1x3_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.42 | 89.03 | x | x | x | 8343 | ckpt | log | json |
tsn_r50_dense_1x1x5_50e_kinetics400_rgb | 340x256 | 8x3 | ResNet50 | ImageNet | 70.18 | 89.10 | 69.15 | 88.56 | 12.7 (8x10 frames) | 7028 | ckpt | log | json |
tsn_r50_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8x2 | ResNet50 | ImageNet | 70.91 | 89.51 | x | x | 10.7 (25x3 frames) | 8344 | ckpt | log | json |
tsn_r50_320p_1x1x3_110e_kinetics400_flow | short-side 320 | 8x2 | ResNet50 | ImageNet | 55.70 | 79.85 | x | x | x | 8471 | ckpt | log | json |
tsn_r50_320p_1x1x3_kinetics400_twostream [1: 1]* | x | x | ResNet50 | ImageNet | 72.76 | 90.52 | x | x | x | x | x | x | x |
tsn_r50_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 71.80 | 90.17 | x | x | x | 8343 | ckpt | log | json |
tsn_r50_320p_1x1x8_100e_kinetics400_rgb | short-side 320 | 8x3 | ResNet50 | ImageNet | 72.41 | 90.55 | x | x | 11.1 (25x3 frames) | 8344 | ckpt | log | json |
tsn_r50_320p_1x1x8_110e_kinetics400_flow | short-side 320 | 8x4 | ResNet50 | ImageNet | 57.76 | 80.99 | x | x | x | 8473 | ckpt | log | json |
tsn_r50_320p_1x1x8_kinetics400_twostream [1: 1]* | x | x | ResNet50 | ImageNet | 74.64 | 91.77 | x | x | x | x | x | x | x |
tsn_r50_video_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8 | ResNet50 | ImageNet | 71.11 | 90.04 | x | x | x | 8343 | ckpt | log | json |
tsn_r50_dense_1x1x8_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 70.77 | 89.3 | 68.75 | 88.42 | 12.2 (8x10 frames) | 8344 | ckpt | log | json |
tsn_r50_video_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 71.14 | 89.63 | x | x | x | 21558 | ckpt | log | json |
tsn_r50_video_dense_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.40 | 89.12 | x | x | x | 21553 | ckpt | log | json |
Here, we use [1: 1] to indicate that we combine the RGB and flow scores with a 1:1 ratio to get the two-stream prediction (without applying softmax).
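The fusion described above is a simple weighted sum of the raw (pre-softmax) class scores of the RGB and flow models. A minimal sketch, with illustrative variable names:

```python
# Minimal sketch of 1:1 two-stream score fusion; variable names are illustrative.
import numpy as np


def fuse_two_stream(rgb_scores: np.ndarray, flow_scores: np.ndarray,
                    weights=(1.0, 1.0)) -> np.ndarray:
    """rgb_scores, flow_scores: (num_videos, num_classes) raw class scores."""
    fused = weights[0] * rgb_scores + weights[1] * flow_scores
    return fused.argmax(axis=1)  # predicted labels of the two-stream model
```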
It is possible and convenient to use a third-party backbone for TSN within the MMAction2 framework. Here we provide some examples (see the config sketch after the table):
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|
tsn_rn101_32x4d_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8x2 | ResNeXt101-32x4d [MMCls] | ImageNet | 73.43 | 91.01 | ckpt | log | json |
tsn_dense161_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8x2 | Densenet-161 [TorchVision] | ImageNet | 72.78 | 90.75 | ckpt | log | json |
tsn_swin_transformer_video_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8 | Swin Transformer Base [timm] | ImageNet | 77.51 | 92.92 | ckpt | log | json |
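As a rough illustration of what such a config can look like, the backbone section of the TorchVision example above is sketched below. The field names and the `torchvision.` type prefix are assumptions based on the example configs listed in the table and may differ between MMAction2 versions; treat this as a sketch, not a copy-paste config.

```python
# Hedged sketch of plugging a TorchVision backbone into a TSN recognizer config.
model = dict(
    type='Recognizer2D',
    backbone=dict(
        type='torchvision.densenet161',  # backbone taken from TorchVision
        pretrained=True),                # ImageNet pre-training
    cls_head=dict(
        type='TSNHead',
        num_classes=400,                 # Kinetics-400
        in_channels=2208))               # DenseNet-161 feature dimension
```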
In the data benchmark, we compare: (1) different input resolutions (340x256, short-side 320, and short-side 256); (2) different data augmentation methods (MultiScaleCrop and RandomResizedCrop); (3) different testing protocols (25x10 frames and 25x3 frames). A sketch of the two augmentation pipelines follows the table.
config | resolution | training augmentation | testing protocol | top1 acc | top5 acc | ckpt | log | json |
---|---|---|---|---|---|---|---|---|
tsn_r50_multiscalecrop_340x256_1x1x3_100e_kinetics400_rgb | 340x256 | MultiScaleCrop | 25x10 frames | 70.60 | 89.26 | ckpt | log | json |
x | 340x256 | MultiScaleCrop | 25x3 frames | 70.52 | 89.39 | x | x | x |
tsn_r50_randomresizedcrop_340x256_1x1x3_100e_kinetics400_rgb | 340x256 | RandomResizedCrop | 25x10 frames | 70.11 | 89.01 | ckpt | log | json |
x | 340x256 | RandomResizedCrop | 25x3 frames | 69.95 | 89.02 | x | x | x |
tsn_r50_multiscalecrop_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | MultiScaleCrop | 25x10 frames | 70.32 | 89.25 | ckpt | log | json |
x | short-side 320 | MultiScaleCrop | 25x3 frames | 70.54 | 89.39 | x | x | x |
tsn_r50_randomresizedcrop_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | RandomResizedCrop | 25x10 frames | 70.44 | 89.23 | ckpt | log | json |
x | short-side 320 | RandomResizedCrop | 25x3 frames | 70.91 | 89.51 | x | x | x |
tsn_r50_multiscalecrop_256p_1x1x3_100e_kinetics400_rgb | short-side 256 | MultiScaleCrop | 25x10 frames | 70.42 | 89.03 | ckpt | log | json |
x | short-side 256 | MultiScaleCrop | 25x3 frames | 70.79 | 89.42 | x | x | x |
tsn_r50_randomresizedcrop_256p_1x1x3_100e_kinetics400_rgb | short-side 256 | RandomResizedCrop | 25x10 frames | 69.80 | 89.06 | ckpt | log | json |
x | short-side 256 | RandomResizedCrop | 25x3 frames | 70.48 | 89.89 | x | x | x |
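The two augmentation choices compared above can be written as MMAction2-style training pipeline steps. The parameter values below are illustrative assumptions rather than the exact values used in the benchmark configs.

```python
# Hedged sketch of the two compared augmentations as pipeline steps.
multiscale_crop = [
    dict(type='MultiScaleCrop',           # random crop over several scales
         input_size=224,
         scales=(1, 0.875, 0.75, 0.66),
         random_crop=False,
         max_wh_scale_gap=1),
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
]

random_resized_crop = [
    dict(type='RandomResizedCrop'),       # Inception-style random area crop
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
]
```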
config | resolution | backbone | pretrain | w. OmniSource | top1 acc | top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x3_100e_kinetics400_rgb | 340x256 | ResNet50 | ImageNet | ❌ | 70.6 | 89.3 | 4.3 (25x10 frames) | 8344 | ckpt | log | json |
x | 340x256 | ResNet50 | ImageNet | ✔️ | 73.6 | 91.0 | x | 8344 | ckpt | x | x |
x | short-side 320 | ResNet50 | IG-1B [1] | ❌ | 73.1 | 90.4 | x | 8344 | ckpt | x | x |
x | short-side 320 | ResNet50 | IG-1B [1] | ✔️ | 75.7 | 91.9 | x | 8344 | ckpt | x | x |
[1] We obtain the pre-trained model from torch hub; the pre-trained model we used is `resnet50_swsl`.
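For reference, the IG-1B weakly/semi-supervised ResNet50 weights mentioned above can be fetched via torch hub as sketched below; the repo name is the one published by Facebook Research for the `*_swsl` models.

```python
# Sketch of fetching the resnet50_swsl weights via torch.hub.
import torch

resnet50_swsl = torch.hub.load(
    'facebookresearch/semi-supervised-ImageNet1K-models',  # hub repo
    'resnet50_swsl')                                        # pre-trained entry
```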
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_video_1x1x8_100e_kinetics600_rgb | short-side 256 | 8x2 | ResNet50 | ImageNet | 74.8 | 92.3 | 11.1 (25x3 frames) | 8344 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_video_1x1x8_100e_kinetics700_rgb | short-side 256 | 8x2 | ResNet50 | ImageNet | 61.7 | 83.6 | 11.1 (25x3 frames) | 8344 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x8_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 18.55 | 44.80 | 17.53 | 44.29 | 10978 | ckpt | log | json |
tsn_r50_1x1x16_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 15.77 | 39.85 | 13.33 | 35.58 | 5691 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x8_50e_sthv2_rgb | height 256 | 8 | ResNet50 | ImageNet | 28.59 | 59.56 | x | x | 10966 | ckpt | log | json |
tsn_r50_1x1x16_50e_sthv2_rgb | height 256 | 8 | ResNet50 | ImageNet | 20.89 | 49.16 | x | x | 8337 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_1x1x6_100e_mit_rgb | short-side 256 | 8x2 | ResNet50 | ImageNet | 26.84 | 51.6 | 8339 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | mAP | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|
tsn_r101_1x1x5_50e_mmit_rgb | short-side 256 | 8x2 | ResNet101 | ImageNet | 61.09 | 10467 | ckpt | log | json |
config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|
tsn_r50_320p_1x1x8_50e_activitynet_video_rgb | short-side 320 | 8x1 | ResNet50 | Kinetics400 | 73.93 | 93.44 | 5692 | ckpt | log | json |
tsn_r50_320p_1x1x8_50e_activitynet_clip_rgb | short-side 320 | 8x1 | ResNet50 | Kinetics400 | 76.90 | 94.47 | 5692 | ckpt | log | json |
tsn_r50_320p_1x1x8_150e_activitynet_video_flow | 340x256 | 8x2 | ResNet50 | Kinetics400 | 57.51 | 83.02 | 5780 | ckpt | log | json |
tsn_r50_320p_1x1x8_150e_activitynet_clip_flow | 340x256 | 8x2 | ResNet50 | Kinetics400 | 59.51 | 82.69 | 5780 | ckpt | log | json |
config[1] | tag category | resolution | gpus | backbone | pretrain | mAP | HATNet[2] | HATNet-multi[2] | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|
tsn_r18_1x1x8_100e_hvu_action_rgb | action | short-side 256 | 8x2 | ResNet18 | ImageNet | 57.5 | 51.8 | 53.5 | ckpt | log | json |
tsn_r18_1x1x8_100e_hvu_scene_rgb | scene | short-side 256 | 8 | ResNet18 | ImageNet | 55.2 | 55.8 | 57.2 | ckpt | log | json |
tsn_r18_1x1x8_100e_hvu_object_rgb | object | short-side 256 | 8 | ResNet18 | ImageNet | 45.7 | 34.2 | 35.1 | ckpt | log | json |
tsn_r18_1x1x8_100e_hvu_event_rgb | event | short-side 256 | 8 | ResNet18 | ImageNet | 63.7 | 38.5 | 39.8 | ckpt | log | json |
tsn_r18_1x1x8_100e_hvu_concept_rgb | concept | short-side 256 | 8 | ResNet18 | ImageNet | 47.5 | 26.1 | 27.3 | ckpt | log | json |
tsn_r18_1x1x8_100e_hvu_attribute_rgb | attribute | short-side 256 | 8 | ResNet18 | ImageNet | 46.1 | 33.6 | 34.9 | ckpt | log | json |
- | Overall | short-side 256 | - | ResNet18 | ImageNet | 52.6 | 40.0 | 41.3 | - | - | - |
[1] For simplicity, we train a separate model for each tag category as the baseline for HVU.
[2] The performance of HATNet and HATNet-multi is taken from the paper Large Scale Holistic Video Understanding. The proposed HATNet is a two-branch convolutional network (one 2D branch, one 3D branch) and shares the same backbone (ResNet18) with ours. The inputs of HATNet are 16- or 32-frame video clips (much longer than ours), while the input resolution is coarser (112 instead of 224). HATNet is trained on each individual task (each tag category), while HATNet-multi is trained on multiple tasks. Since there is no released code or model for HATNet, we simply include the performance reported in the original paper.
For more details on data preparation, you can refer to the corresponding parts of the data preparation documentation.
You can use the following command to train a model.
```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
Example: train the TSN model on the Kinetics-400 dataset in a deterministic manner with periodic validation.
```shell
python tools/train.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic
```
For more details, you can refer to the Training setting part in getting_started.
You can use the following command to test a model.
```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```
Example: test the TSN model on the Kinetics-400 dataset and dump the result to a JSON file.
```shell
python tools/test.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json
```
For more details, you can refer to the Test a dataset part in getting_started.