# CODER
![CODER](img/1.png)

CODER: Knowledge-infused cross-lingual medical term embedding for term normalization. [Paper](http://arxiv.org/abs/2011.02947)

![CODER++](img/coder++.png)

CODER++: Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations. [Paper](https://arxiv.org/abs/2204.00391)

# Use the model with transformers
The models have been uploaded to the Hugging Face model hub and can be loaded with the `transformers` library.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/UMLSBert_ENG")
model = AutoModel.from_pretrained("GanjinZero/UMLSBert_ENG")
```
English checkpoint: **GanjinZero/coder_eng** or GanjinZero/UMLSBert_ENG (old name)

English CODER++ checkpoint: **GanjinZero/coder_eng_pp** (trained with hard negative sampling)
<!-- Please use transformers 3.4.0 to load CODER++; we find that the model loaded with transformers 4.12.0 behaves differently! -->

Multilingual checkpoint: **GanjinZero/coder_all** ~~or GanjinZero/UMLSBert_ALL (deprecated old name)~~

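To get term embeddings from these checkpoints, encode terms with the tokenizer, pool the model output, and compare the normalized vectors with cosine similarity for normalization or clustering. The snippet below is an illustrative sketch rather than the exact pipeline from the papers: the [CLS] pooling, max length, and example terms are assumptions (see the scripts under `test/` for the setup used in the evaluations).

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/coder_eng_pp")
model = AutoModel.from_pretrained("GanjinZero/coder_eng_pp")
model.eval()

terms = ["myocardial infarction", "heart attack", "headache"]  # illustrative example terms

with torch.no_grad():
    batch = tokenizer(terms, padding=True, truncation=True, max_length=32, return_tensors="pt")
    outputs = model(**batch)
    # [CLS] pooling is an assumption here; check the evaluation scripts for the pooling they use.
    embeddings = outputs.last_hidden_state[:, 0]
    embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

# Cosine similarities between normalized embeddings; synonyms such as
# "myocardial infarction" and "heart attack" should score higher than unrelated terms.
print(embeddings @ embeddings.T)
```
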
# Train your model
```shell
cd pretrain
python train.py --umls_dir your_umls_dir --model_name_or_path monologg/biobert_v1.1_pubmed
```
your_umls_dir should contain **MRCONSO.RRF**, **MRREL.RRF**, and **MRSTY.RRF**.
UMLS download path: [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsarchives04.html#2020AA).

# A small tool for loading UMLS RRF files
```python
from pretrain.load_umls import UMLS
umls = UMLS(your_umls_dir)
```

# Test CODER or other embeddings
## CADEC
```shell
cd test
python cadec/cadec_eval.py bert_model_name_or_path
python cadec/cadec_eval.py word_embedding_path
```

## MANTRA GSC
Download [the Mantra GSC](https://files.ifi.uzh.ch/cl/mantra/gsc/GSC-v1.1.zip), unzip the XML files to /test/mantra/dataset, and run
```shell
cd test/mantra
python test.py
```

## MCSM
```shell
cd test/embeddings_reimplement
python mcsm.py
```

## DDBRC
Only sampled data is provided.
```shell
cd test/diseasedb
python train.py your_embedding embedding_type freeze_or_not gpu_id
```
- embedding_type should be one of [bert, word, cui]
- freeze_or_not should be one of [T, F]: T freezes the embedding, F fine-tunes it

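For example, an illustrative invocation (the embedding argument is a placeholder; substitute your own BERT checkpoint path, word-embedding file, or CUI-embedding file):

```shell
cd test/diseasedb
# evaluate a frozen BERT-style embedding on GPU 0 (values are illustrative)
python train.py path/to/your_bert_model bert T 0
```
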
# Citation
```bibtex
@article{YUAN2022103983,
    title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
    journal = {Journal of Biomedical Informatics},
    pages = {103983},
    year = {2022},
    issn = {1532-0464},
    doi = {https://doi.org/10.1016/j.jbi.2021.103983},
    url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
    author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
    keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
```

```bibtex
@inproceedings{zeng-etal-2022-automatic,
    title = "Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations",
    author = "Zeng, Sihang  and
      Yuan, Zheng  and
      Yu, Sheng",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.8",
    pages = "91--96",
    abstract = "Term clustering is important in biomedical knowledge graph construction. Using similarities between terms embedding is helpful for term clustering. State-of-the-art term embeddings leverage pretrained language models to encode terms, and use synonyms and relation knowledge from knowledge graphs to guide contrastive learning. These embeddings provide close embeddings for terms belonging to the same concept. However, from our probing experiments, these embeddings are not sensitive to minor textual differences which leads to failure for biomedical term clustering. To alleviate this problem, we adjust the sampling strategy in pretraining term embeddings by providing dynamic hard positive and negative samples during contrastive learning to learn fine-grained representations which result in better biomedical term clustering. We name our proposed method as CODER++, and it has been applied in clustering biomedical concepts in the newly released Biomedical Knowledge Graph named BIOS.",
}
```