# Cost-effective Instruction Learning for Pathology Vision and Language Analysis (CLOVER)
The advent of vision-language models has fostered interactive conversations between AI models and humans. Yet applying these models in the clinic must contend with daunting challenges around large-scale training data and financial and computational resources. Here we propose a cost-effective instruction learning framework for conversational pathology, named CLOVER. CLOVER trains only a lightweight module via instruction tuning while freezing the parameters of the large language model. Instead of using costly GPT-4, we propose well-designed prompts on GPT-3.5 for building generation-based instructions, emphasizing the utility of pathological knowledge derived from Internet sources. To augment the use of instructions, we also construct a high-quality set of template-based instructions in the context of digital pathology. On two benchmark datasets, our findings reveal the strength of hybrid-form instructions for visual question answering in pathology. Extensive results show the cost-effectiveness of CLOVER in answering both open-ended and closed-ended questions: CLOVER outperforms strong baselines that have 37 times more training parameters and use instruction data generated with GPT-4. Through instruction tuning, CLOVER also exhibits robust few-shot learning on an external clinical dataset. These findings demonstrate that the cost-effective modeling of CLOVER could accelerate the adoption of rapid conversational applications in digital pathology.
## Release
- Checkpoints and the instruction dataset will be released soon.

## Workflow of CLOVER
<p align="center">
    <img src="imgs/image.png" width="90%"> <br>
 
  *CLOVER employs the training framework of BLIP-2 to achieve fast domain tuning with lightweight parameters. The training process of CLOVER consists of two major stages: (i) alignment of vision and language and (ii) supervised fine-tuning with instructions. The alignment stage compels the model to acquire useful joint vision-language representations, while instruction fine-tuning is vital for activating the LLM to excel at visual question answering. Stage 1 takes image-text pairs as input, for which we use the large-scale Quilt-1M dataset. Stage 2 demands domain-specific instruction data; given the significant lack of such data in the literature, we propose a low-cost solution for generating instruction data carefully designed for pathological analysis.*
</p>
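
For intuition, here is a minimal sketch of the parameter-efficient idea behind this workflow: the LLM stays frozen and only a lightweight bridging module is trained. The class names and dimensions below are illustrative stand-ins, not CLOVER's actual implementation.

```python
# Illustrative sketch: freeze the LLM, train only a lightweight module.
# Module names and dimensions are hypothetical stand-ins.
import torch
import torch.nn as nn

class LightweightBridge(nn.Module):
    """Stand-in for a Q-Former-style module that maps visual features
    into the LLM's embedding space via learned query tokens."""
    def __init__(self, vision_dim=1408, llm_dim=2048, num_queries=32):
        super().__init__()
        self.query_tokens = nn.Parameter(torch.zeros(1, num_queries, vision_dim))
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_feats):
        # Cross-attention with vision_feats is omitted for brevity;
        # this sketch only projects the learned queries.
        queries = self.query_tokens.expand(vision_feats.size(0), -1, -1)
        return self.proj(queries)

bridge = LightweightBridge()
llm = nn.TransformerEncoder(  # stand-in for a frozen large language model
    nn.TransformerEncoderLayer(d_model=2048, nhead=8, batch_first=True),
    num_layers=2,
)
for p in llm.parameters():  # freeze the LLM: no gradients, no updates
    p.requires_grad = False

# Only the lightweight bridge receives gradient updates.
optimizer = torch.optim.AdamW(bridge.parameters(), lr=1e-4)
```
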
## Contents
- [Cost-effective Instruction Learning for Pathology Vision and Language Analysis (CLOVER)](#cost-effective-instruction-learning-for-pathology-vision-and-language-analysis-clover)
  - [Release](#release)
  - [Workflow of CLOVER](#workflow-of-clover)
  - [Contents](#contents)
    - [Data Download](#data-download)
    - [Installation](#installation)
    - [Training](#training)
    - [Inference](#inference)
  - [Case Study](#case-study)
  - [Related Projects](#related-projects)

### Data Download
- Stage 1: The Quilt-1M dataset can be downloaded from [Google](https://docs.google.com/forms/d/e/1FAIpQLSdSe06DIbPn71jA2rCxe_5tUPfyHhSH1Z7ZTJBxWM26cnpZFg/viewform) or [Zenodo](https://zenodo.org/records/8239942).
- Stage 2: The CLOVER instructions will be released. You can also generate the data yourself with our prompts in [generate_instructions.py](./generate_instructions.py); a rough sketch follows below.
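
As a rough illustration of the generation-based route, the snippet below queries GPT-3.5 with a placeholder pathology prompt. The system/user prompts and the caption text are made up for illustration; the actual carefully designed prompts are in [generate_instructions.py](./generate_instructions.py).

```python
# Rough sketch of generation-based instruction creation with GPT-3.5.
# The prompts and caption are placeholders; see generate_instructions.py
# for the prompts actually used by CLOVER.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

caption = "H&E-stained section showing glandular structures."  # hypothetical caption
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You generate pathology VQA instruction data."},
        {"role": "user", "content": f"Write one question-answer pair grounded in this caption:\n{caption}"},
    ],
)
print(response.choices[0].message.content)
```
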
### Installation

1. Create a conda environment
```bash
conda create -n clover python=3.9
conda activate clover
```

2. Build from source
```bash
git clone https://github.com/JLINEkai/CLOVER.git
cd CLOVER
pip install -r requirements.txt
```

### Training
- Stage 1 (Alignment):
```bash
python train_blip2qformer.py
```
- Stage 2 (Instruction fine-tuning):

You can choose the large language model (LLM) in [pretrain_stage2.yaml](./lavis/projects/blip2/train/pretrain_stage2.yaml). We provide FlanT5-XL and Vicuna-7B; a sketch for inspecting this config follows below.
```bash
python -m torch.distributed.run --nproc_per_node=1 train.py
```
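
If you would rather inspect the LLM choice programmatically before editing the YAML by hand, a small sketch with PyYAML follows; the key names are assumptions about the config layout, so check them against the actual file.

```python
# Sketch: inspect the stage-2 config before switching the LLM.
# The "model" key is an assumption; verify against the real YAML.
import yaml

cfg_path = "lavis/projects/blip2/train/pretrain_stage2.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

print(cfg.get("model"))  # shows the currently configured model section
```
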
### Inference

```bash
python -m torch.distributed.run --nproc_per_node=1 evaluate.py --cfg-path lavis/projects/blip2/eval/vqav2_zeroshot_flant5xl_eval.yaml
```
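
Beyond batch evaluation, you may want to query a tuned checkpoint interactively. The sketch below assumes CLOVER keeps LAVIS's standard loading interface; the `name`/`model_type` strings and the image path are illustrative and may differ for the released checkpoints.

```python
# Hedged sketch of interactive pathology VQA via LAVIS's BLIP-2 interface.
# The name/model_type strings are illustrative, not CLOVER's registry keys.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_t5", model_type="pretrain_flant5xl", is_eval=True, device=device
)

raw_image = Image.open("example_patch.png").convert("RGB")  # hypothetical image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

answer = model.generate({
    "image": image,
    "prompt": "Question: What tissue type is shown in this image? Answer:",
})
print(answer)
```
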
## Case Study
<p align="center">
    <img src="imgs/case1.png" width="90%"> <br>
 
  *Qualitative comparisons of visual question answering on QUILT-VQA. (Image source: QUILT-VQA)*
</p>
<p align="center">
    <img src="imgs/case2.png" width="90%"> <br>
 
  *Qualitative comparisons of visual question answering on LLaVA-Med-17K. (Image source: [link](https://www.ncbi.nlm.nih.gov/pubmed/26147524))*
</p>
If you have any questions, please send an email to chenkaitao@pjlab.org.cn.
## Related Projects
- Our model is based on [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://github.com/salesforce/LAVIS/tree/main).