<h1 align="center">Welcome to PathFlowAI </h1>
<p>
  <img alt="Version" src="https://img.shields.io/badge/version-0.1-blue.svg?cacheSeconds=2592000" />
  <a href="https://jlevy44.github.io/PathFlowAI/">
    <img alt="Documentation" src="https://img.shields.io/badge/documentation-yes-brightgreen.svg" target="_blank" />
  </a>
</p>

> A Convenient High-Throughput Workflow for Preprocessing, Deep Learning Analytics and Interpretation in Digital Pathology

### 🏠 [Homepage](https://github.com/jlevy44/PathFlowAI)

Published in the Proceedings of the Pacific Symposium on Biocomputing 2020. Manuscript: https://psb.stanford.edu/psb-online/proceedings/psb20/Levy.pdf
## Install

First, install [OpenSlide](https://openslide.org/download/). Note: you may also need to install libiconv and shapely using conda. More installation information will be added; in the meantime, please submit an issue if you run into trouble.
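
If you go the conda route, a minimal sketch (package names here are assumptions about what the conda-forge channel provides; adjust for your platform):

```sh
# Sketch only: pull the OpenSlide C library plus the extra dependencies
# noted above from conda-forge before pip-installing pathflowai.
conda install -c conda-forge openslide libiconv shapely
```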

```sh
pip install pathflowai
install_apex  # installs NVIDIA Apex (mixed-precision training utilities)
```
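
As a quick sanity check (not part of the official instructions), you can confirm that the package imports and the console scripts are on your PATH:

```sh
# Verify the Python package and one of the CLI entry points resolve.
python -c "import pathflowai"
command -v pathflowai-preprocess
```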

## Usage

```sh
pathflowai-preprocess -h    # preprocessing commands (WSIs and annotations to patches)
pathflowai-train_model -h   # deep learning model training commands
pathflowai-monitor -h       # monitoring utilities
pathflowai-visualize -h     # visualization commands
```

See the [Wiki](https://github.com/jlevy44/PathFlowAI/wiki) for more information on setting up and running the workflow. Please submit feedback as issues; if you have any trouble with installation, I am more than happy to provide advice and fixes.

## Author

👤 **Joshua Levy**

* Github: [@jlevy44](https://github.com/jlevy44)

## 🤝 Contributing

Contributions, issues, and feature requests are welcome!<br />Feel free to check the [issues page](https://github.com/jlevy44/PathFlowAI/issues).

**Figures from the Paper**

![1](https://user-images.githubusercontent.com/19698023/62230963-0199d780-b391-11e9-96eb-ac9b86686723.jpeg)

Fig. 1. PathFlowAI Framework: a) Annotations and whole slide images are preprocessed in parallel using Dask; b) Deep learning prediction model is trained on the patches; c) Results are visualized; d) UMAP embeddings provide diagnostics; e) SHAP framework is used to find important regions for the prediction.

![2](https://user-images.githubusercontent.com/19698023/62231545-41ad8a00-b392-11e9-8d47-f9f83f4b764a.jpeg)

Fig. 2. Comparison of PathFlowAI to Preprocessing WSI in Series for: a) Preprocessing time, b) Storage Space, c) Impact on the filesystem. The PathFlowAI method of parallel processing followed by centralized storage saves both time and storage space.

![3](https://user-images.githubusercontent.com/19698023/62231546-41ad8a00-b392-11e9-9b16-ea3b2b92bf3f.jpeg)

Fig. 3. Segmentation: Original (a) Annotations Compared to Predicted (b) Annotations; (c) Pathologist annotations guided by the classification model.

![4](https://user-images.githubusercontent.com/19698023/62230966-02326e00-b391-11e9-989c-155ff0a9be67.jpeg)

Fig. 4. Portal Classification Results: a) Darker tiles indicate a higher assigned probability of portal classification, b) AUC-ROC curves for the test images that estimate overall accuracy given different sensitivity cutoffs, c) H&E patch (left) with corresponding SHAP interpretations (right) for four patches; the probability value of portal classification is shown, and on the SHAP value scale, red indicates regions that the model attributes to portal prediction, d) Model trained UMAP embeddings of patches colored by original portal coverage (area of patch covered by portal) as judged by pathologist and visualization of individual patches.