# BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
[BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model pretrained with contrastive learning on [PMC-15M](https://aka.ms/biomedclip-paper), a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central.
It uses PubMedBERT as the text encoder and a Vision Transformer as the image encoder, with domain-specific adaptations.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
BiomedCLIP establishes a new state of the art on a wide range of standard datasets and substantially outperforms prior VLP approaches:

![](https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/biomed-vlp-eval.svg)
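
For intuition, the following is a minimal, self-contained sketch of the CLIP-style contrastive objective referred to above; the tensors are random placeholders standing in for encoder outputs, not real BiomedCLIP features.

```python
# Sketch of symmetric contrastive (CLIP-style) training on figure-caption pairs:
# matched image-text pairs are pulled together, mismatched pairs pushed apart.
import torch
import torch.nn.functional as F

batch_size, dim = 4, 512

# Stand-ins for ViT image embeddings and PubMedBERT caption embeddings.
image_features = F.normalize(torch.randn(batch_size, dim), dim=-1)
text_features = F.normalize(torch.randn(batch_size, dim), dim=-1)

logit_scale = 100.0                                       # learned temperature in CLIP-style models
logits = logit_scale * image_features @ text_features.T  # (batch, batch) similarity matrix
targets = torch.arange(batch_size)                        # the i-th image matches the i-th caption

# Symmetric cross-entropy over rows (image-to-text) and columns (text-to-image).
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
print(loss.item())
```

At inference time, the same normalized embeddings and similarity scores drive cross-modal retrieval and zero-shot classification.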

## Citation

```bibtex
@misc{https://doi.org/10.48550/arXiv.2303.00915,
  doi = {10.48550/ARXIV.2303.00915},
  url = {https://arxiv.org/abs/2303.00915},
  author = {Zhang, Sheng and Xu, Yanbo and Usuyama, Naoto and Bagga, Jaspreet and Tinn, Robert and Preston, Sam and Rao, Rajesh and Wei, Mu and Valluri, Naveen and Wong, Cliff and Lungren, Matthew and Naumann, Tristan and Poon, Hoifung},
  title = {Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing},
  publisher = {arXiv},
  year = {2023},
}
```

## Model Use

### How to use

Please refer to this [example notebook](https://aka.ms/biomedclip-example-notebook).
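
The notebook is the authoritative reference. As a quick orientation, the following is a minimal sketch of zero-shot image classification with this checkpoint, assuming the OpenCLIP (`open_clip_torch`) loading interface; the image path and label prompts are placeholders.

```python
import torch
import torch.nn.functional as F
from PIL import Image
import open_clip

# Load the checkpoint with its matching preprocessing and tokenizer from the Hugging Face Hub.
model_name = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
model, preprocess = open_clip.create_model_from_pretrained(model_name)
tokenizer = open_clip.get_tokenizer(model_name)
model.eval()

# Placeholder image and candidate labels for zero-shot classification.
labels = ["chest X-ray", "brain MRI", "histopathology slide"]
image = preprocess(Image.open("example_image.png")).unsqueeze(0)
texts = tokenizer([f"this is a photo of a {label}" for label in labels], context_length=256)

with torch.no_grad():
    # Encode both modalities, normalize, and score with scaled cosine similarity.
    image_features = F.normalize(model.encode_image(image), dim=-1)
    text_features = F.normalize(model.encode_text(texts), dim=-1)
    logits = model.logit_scale.exp() * image_features @ text_features.T
    probs = logits.softmax(dim=-1)

for label, prob in zip(labels, probs[0].tolist()):
    print(f"{label}: {prob:.4f}")
```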

### Intended Use

This model is intended to be used solely for (I) future research on vision-language processing and (II) reproducibility of the experimental results reported in the reference paper.

#### Primary Intended Use
The primary intended use is to support AI researchers building on top of this work. BiomedCLIP and its associated models should be helpful for exploring various biomedical VLP research questions, especially in the radiology domain.

#### Out-of-Scope Use

**Any** deployed use case of the model, commercial or otherwise, is currently out of scope. Although we evaluated the models using a broad set of publicly available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/biomedclip-paper) for more details.

## Data
This model builds upon the [PMC-15M dataset](https://aka.ms/biomedclip-paper), a large-scale parallel image-text dataset for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central and covers a diverse range of biomedical image types, including microscopy, radiography, and histology.

## Limitations
This model was developed using English corpora, and thus can be considered English-only.
## Further information
Please refer to the corresponding paper, ["Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing"](https://aka.ms/biomedclip-paper), for additional details on the model training and evaluation.