# Vision Transformer for Dense Prediction

## Introduction

<!-- [ALGORITHM] -->

<a href="https://github.com/isl-org/DPT">Official Repo</a>

<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/decode_heads/dpt_head.py#L215">Code Snippet</a>

## Abstract

<!-- [ABSTRACT] -->

We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art. Our models are available at [this https URL](https://github.com/isl-org/DPT).

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/24582831/142901057-00aabea5-dab4-43d3-a14a-5f73eb5dd9b9.png" width="80%"/>
</div>

<details>
<summary align="right"><a href="https://arxiv.org/abs/2103.13413">DPT (ArXiv'2021)</a></summary>

```latex
@article{dosovitskiy2020,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={arXiv preprint arXiv:2010.11929},
  year={2020}
}

@article{Ranftl2021,
  author    = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
  title     = {Vision Transformers for Dense Prediction},
  journal   = {arXiv preprint arXiv:2103.13413},
  year      = {2021},
}
```
</details>

## Usage

To use pre-trained models from other repositories, you need to convert the checkpoint keys first.

We provide a script [`vit2mmseg.py`](../../tools/model_converters/vit2mmseg.py) in the tools directory to convert the keys of models from [timm](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py) to MMSegmentation style.

```shell
python tools/model_converters/vit2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
```

For example:

```shell
python tools/model_converters/vit2mmseg.py https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth pretrain/jx_vit_base_p16_224-80ecf9dd.pth
```

This script converts the model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.
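
If you then want to train DPT from the converted weights, one option is to point the config's `model.pretrained` field at them. The snippet below is a minimal sketch rather than an official config: it assumes the new file lives next to the released DPT configs and that the converted checkpoint was stored at `pretrain/jx_vit_base_p16_224-80ecf9dd.pth` as in the example above.

```python
# Minimal sketch of a user config (not shipped with the repo): inherit the
# released DPT config and point `model.pretrained` at the checkpoint that
# vit2mmseg.py produced. Adjust both paths to your local setup.
_base_ = ['./dpt_vit-b16_512x512_160k_ade20k.py']

model = dict(pretrained='pretrain/jx_vit_base_p16_224-80ecf9dd.pth')
```

Training can then be launched with this config through `tools/train.py` or `tools/dist_train.sh` as usual.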
## Results and models

### ADE20K

| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) |  mIoU | mIoU(ms+flip) | config | download |
| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------ | -------- |
| DPT | ViT-B | 512x512 | 160000 | 8.09 | 10.41 | 46.97 | 48.34 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/dpt/dpt_vit-b16_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/dpt/dpt_vit-b16_512x512_160k_ade20k/dpt_vit-b16_512x512_160k_ade20k-db31cf52.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/dpt/dpt_vit-b16_512x512_160k_ade20k/dpt_vit-b16_512x512_160k_ade20k-20210809_172025.log.json) |
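
For a quick qualitative check of the released checkpoint, single-image inference can be run through MMSegmentation's high-level Python API. The snippet below is a sketch, not part of this config: the checkpoint path, the input image `demo.png`, and the output file name are assumptions to adjust for your setup.

```python
# Sketch: run the released DPT model on one image and save a visualization.
# All file paths here are assumptions; point them at your local copies of the
# config, the downloaded checkpoint, and an input image.
from mmseg.apis import inference_segmentor, init_segmentor

config_file = 'configs/dpt/dpt_vit-b16_512x512_160k_ade20k.py'
checkpoint_file = 'checkpoints/dpt_vit-b16_512x512_160k_ade20k-db31cf52.pth'

# Build the segmentor from the config and load the trained weights.
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# Predict the segmentation map and write an overlay image to disk.
result = inference_segmentor(model, 'demo.png')
model.show_result('demo.png', result, out_file='dpt_ade20k_demo.png')
```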