# Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment
|
|
In this study, we present a temporal foundation model: a multi-modal longitudinal representation learning (MLRL) pipeline. We developed MLRL on an in-house longitudinal multi-modal dataset comprising 3,719 breast MRI scans and paired reports, and additionally evaluated it on an international public longitudinal dataset comprising 2,516 exams. MLRL is trained with a multi-scale self-supervision scheme that combines single-time-scale vision-text alignment (VTA) learning with multi-time-scale temporal visual/textual progress (TVP/TTP) learning. Importantly, the TVP/TTP strategy overcomes the limitation of uniform temporal learning across patients (i.e., the positive-free pair problem) and enables the extraction of both visual and textual change representations, facilitating downstream evaluation of tumor progression. We evaluated the label efficiency of our method by comparing it to several state-of-the-art self-supervised longitudinal learning and multi-modal vision-language (VL) methods. The results on two longitudinal datasets show that our approach generalizes well and brings significant improvements, with the unsupervised temporal progress metrics obtained from TVP/TTP demonstrating MLRL's ability to distinguish temporal trends between therapy-response populations. Our MLRL framework enables interpretable visual tracking of progressive regions across temporal examinations, aligned with the corresponding reports, offering insights into longitudinal VL foundation tools and potentially facilitating temporal clinical decision-making.
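
To make the multi-scale scheme concrete, below is a minimal PyTorch-style sketch of how such an objective could be composed. It is an illustration only, not the implementation in this repository: the function names, the use of embedding differences as progress representations, and the equal additive weighting are all assumptions (see `run.py` and `./configs` for the actual training code).

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.07):
    # Symmetric InfoNCE: row i of `a` and row i of `b` form the positive
    # pair; all other rows in the batch act as negatives.
    logits = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def multiscale_loss(img_t0, img_t1, txt_t0, txt_t1):
    # Single-time-scale vision-text alignment (VTA): match each exam's
    # image embedding to its own report embedding.
    l_vta = info_nce(img_t0, txt_t0) + info_nce(img_t1, txt_t1)
    # Multi-time-scale progress (TVP/TTP): treat the between-visit
    # embedding difference as a change representation and align the
    # visual change with the textual change of the same patient.
    l_progress = info_nce(img_t1 - img_t0, txt_t1 - txt_t0)
    return l_vta + l_progress  # equal weighting is an assumption

# Toy usage: a batch of 8 patients with two visits each, 128-d embeddings.
B, D = 8, 128
embs = [torch.randn(B, D, requires_grad=True) for _ in range(4)]
loss = multiscale_loss(*embs)
loss.backward()
```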
|
|
## Workflow of multi-modal longitudinal representation learning (MLRL)
|
|
<img src="https://github.com/yawwG/MLRL/blob/main/src/figures/method1.png"/>
|
|
[comment]: <> (## Overview of proposed temporal progress transformer and multi-scale self-supervised consistent learning)

[comment]: <> (<img src="https://github.com/yawwG/MLRL/figures/method2.png"/>)
|
|
## Longitudinal disease progress tracking and visualization of word-based attention given temporal visual progress embeddings
|
|
<img src="https://github.com/yawwG/MLRL/blob/main/src/figures/results1.png"/>
|
|
## Environment Setup
|
|
Start by [installing PyTorch 1.8.1](https://pytorch.org/get-started/locally/) with the right CUDA version, then clone this repository and install the dependencies.
|
|
```bash
$ conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 cudatoolkit=11.1 -c pytorch
$ git clone git@github.com:yawwG/MLRL.git
$ cd MLRL
$ conda env create -f environment.yml
```
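
After the environment is created, activate it with `conda activate <env-name>`, where the environment name is the `name` field in `environment.yml`.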
|
|
|
|
## Code Description

This codebase was developed with Python 3.7, PyTorch 1.8.1, CUDA 11.1, and pytorch-lightning 1.5.9.
Example configurations for pretraining and classification can be found in `./configs`.
All training and testing are done using the `run.py` script. For more documentation, please run:
|
|
```bash
python run.py --help
```
|
|
|
|
The preprocessing steps for each dataset can be found in `datasets/`.
The dataset to use is specified in the config YAML via the `dataset` key.
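
For instance, which dataset a run will use can be checked by loading the config with PyYAML; the snippet below is illustrative, and the printed value is a placeholder rather than a key value verified against this repository:

```python
import yaml  # PyYAML

# Inspect which dataset class (under `datasets/`) a config selects.
with open("configs/MRI_pretrain_config.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg["dataset"])  # e.g. "inhouse_mri" -- placeholder value
```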
|
|
|
|
### Pre-Train MLRL
```bash
python run.py -c configs/MRI_pretrain_config.yaml --train
```
|
|
|
|
### Fine-tune and Test Applications
```bash
python run.py -c configs/MRI_cls_config.yaml --train --test --train_pct 1 &
python run.py -c configs/MRI_cls_config.yaml --train --test --train_pct 0.1 &
python run.py -c configs/MRI_cls_config.yaml --train --test --train_pct 0.05
```
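
Here `--train_pct` sets the fraction of labeled training data used for fine-tuning (100%, 10%, and 5%), matching the label-efficiency evaluation described above; the trailing `&` simply runs the first two jobs in the background.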
|
|
|
|
## Contact details

If you have any questions, please contact us.
|
|
Email: ritse.mann@radboudumc.nl (Ritse Mann); taotanjs@gmail.com (Tao Tan); y.gao@nki.nl (Yuan Gao)

Links: [Netherlands Cancer Institute](https://www.nki.nl/), [Radboud University Medical Center](https://www.radboudumc.nl/en/patient-care), and [Maastricht University](https://www.maastrichtuniversity.nl/nl)

<img src="https://github.com/yawwG/Visualize-what-you-learn/blob/main/figures/NKI.png" width="166.98" height="87.12"/><img src="https://github.com/yawwG/Visualize-what-you-learn/blob/main/figures/RadboudUMC.png" width="231" height="87.12"/><img src="https://github.com/yawwG/Visualize-what-you-learn/blob/main/figures/Maastricht.png" width="237.6" height="87.12"/>