# chexpert-labeler
|
|
CheXpert NLP tool to extract observations from radiology reports.

Read more about our project [here](https://stanfordmlgroup.github.io/competitions/chexpert/) and our AAAI 2019 paper [here](https://arxiv.org/abs/1901.07031).
|
|
## Prerequisites
|
|
Please install the following dependencies or use the Dockerized labeler (see below).
|
|
1. Clone the [NegBio repository](https://github.com/ncbi-nlp/NegBio):

   ```Shell
   git clone https://github.com/ncbi-nlp/NegBio.git
   ```
|
|
2. Add the NegBio directory to your `PYTHONPATH`:

   ```Shell
   export PYTHONPATH={path to negbio directory}:$PYTHONPATH
   ```
|
|
3. Make the virtual environment:

   ```Shell
   conda env create -f environment.yml
   ```
|
|
4. Activate the virtual environment:

   ```Shell
   conda activate chexpert-label
   ```
|
|
5. Install NLTK data:

   ```Shell
   python -m nltk.downloader universal_tagset punkt wordnet
   ```
|
|
6. Download the `GENIA+PubMed` parsing model:

   ```python
   >>> from bllipparser import RerankingParser
   >>> RerankingParser.fetch_and_load('GENIA+PubMed')
   ```
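
   If you prefer to script this step rather than use the interactive interpreter, the same `fetch_and_load` call shown above can be run as a one-liner (a minimal sketch, not an additional requirement):

   ```Shell
   # Download the GENIA+PubMed model non-interactively
   python -c "from bllipparser import RerankingParser; RerankingParser.fetch_and_load('GENIA+PubMed')"
   ```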
|
|
## Usage

Place reports in a headerless, single-column CSV `{reports_path}`. Each report must be enclosed in quotes if (1) it contains a comma or (2) it spans multiple lines. See [sample_reports.csv](https://raw.githubusercontent.com/stanfordmlgroup/chexpert-labeler/master/sample_reports.csv) (with output [labeled_reports.csv](https://raw.githubusercontent.com/stanfordmlgroup/chexpert-labeler/master/labeled_reports.csv)) for an example.
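
For illustration only, a hypothetical two-report CSV in this format could look like the following (these reports are made up and are not the contents of `sample_reports.csv`; the first line needs no quotes, while the second is quoted because it spans multiple lines and contains commas):

```
No acute cardiopulmonary abnormality.
"1. Stable cardiomegaly.
2. No focal consolidation, pleural effusion, or pneumothorax."
```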
|
|
```Shell
python label.py --reports_path {reports_path}
```
|
|
Run `python label.py --help` for descriptions of all of the command-line arguments.
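
For example, to label the bundled sample reports locally (mirroring the Dockerized invocation below, which uses the same `--output_path` and `--verbose` flags):

```Shell
# Label sample_reports.csv and write the results to labeled_reports.csv
python label.py --reports_path sample_reports.csv --output_path labeled_reports.csv --verbose
```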
|
|
### Dockerized Labeler
|
|
```sh
docker build -t chexpert-labeler:latest .
docker run -v $(pwd):/data chexpert-labeler:latest \
    python label.py --reports_path /data/sample_reports.csv --output_path /data/labeled_reports.csv --verbose
```
|
|
## Contributions

This repository builds upon the work of [NegBio](https://negbio.readthedocs.io/en/latest/).
|
|
This tool was developed by Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, and Silviana Ciurea-Ilcus.
|
|
## Citing

If you're using the CheXpert labeling tool, please cite [this paper](https://arxiv.org/abs/1901.07031):
|
|
```
@inproceedings{irvin2019chexpert,
  title={CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison},
  author={Irvin, Jeremy and Rajpurkar, Pranav and Ko, Michael and Yu, Yifan and Ciurea-Ilcus, Silviana and Chute, Chris and Marklund, Henrik and Haghgoo, Behzad and Ball, Robyn and Shpanskaya, Katie and others},
  booktitle={Thirty-Third AAAI Conference on Artificial Intelligence},
  year={2019}
}
```