--- a/README.md
+++ b/README.md
@@ -7,10 +7,9 @@
 1. Fine-tuned Flan-T5 XL and XXL models outperform the traditional BERT model and various GPT models.
 2. Synthetic data augmentation during training improves model performance and data efficiency.
 3. On a test set of synthetic sentences with altered demographic data, the fine-tuned Flan-T5 models were consistently more robust than the GPT models and performed better overall.
-   ![fig1](https://github.com/AIM-Harvard/SDoH/blob/main/resource/fig1.png)
+   ![fig1](https://github.com/AIM-Harvard/SDoH/blob/main/resource/fig1.png?raw=true)
 4. We will release the synthetic training dataset and the out-of-domain performance and robustness evaluation datasets to the broader community for further research and development.
-   ![fig2](https://github.com/AIM-Harvard/SDoH/blob/main/resource/fig3.png)
-
+   ![fig2](https://github.com/AIM-Harvard/SDoH/blob/main/resource/fig3.png?raw=true)
 ## Models
 
 Our research involves the application of two primary models for the classification tasks:
@@ -39,7 +38,55 @@
 
 **If you want to evaluate your model on this,** first run inference on the ***original sentence***, then use the same model to run inference on the ***demographically modified sentences*** for a robustness comparison, as shown in the figure below.
 
-![data flow Diagram](https://github.com/AIM-Harvard/SDoH/blob/main/resource/fig2.png)
+![Data flow diagram](https://github.com/AIM-Harvard/SDoH/blob/main/resource/fig2.png?raw=true)
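+
+For concreteness, here is a minimal sketch of that original-versus-modified comparison, assuming a locally saved fine-tuned Flan-T5 checkpoint; the model path and example sentences are illustrative, not the repository's exact code:
+
+```python
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+# Hypothetical path to a fine-tuned Flan-T5 checkpoint.
+MODEL_PATH = "path/to/finetuned-flan-t5"
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
+model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)
+
+def predict(sentence: str) -> str:
+    """Generate the model's SDoH label(s) for one sentence."""
+    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
+    output_ids = model.generate(**inputs, max_new_tokens=32)
+    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
+
+original = "He lives alone and has no family nearby."
+modified = "She lives alone and has no family nearby."  # demographic swap
+
+# A robust model should assign the same label to both variants.
+print(predict(original), predict(modified))
+```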
 
 - The code and prompts used for synthetic data generation can be found in the Jupyter notebook `synthetic_data_generation_GPT.ipynb`.
 - The prompts fed to GPT-3.5 Turbo are provided as JSON files; a rough sketch of such an API call appears below.
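+
+As a rough illustration of what that notebook does (not the repository's exact code), the sketch below loads one prompt and sends it to GPT-3.5 Turbo; the prompt file name and JSON layout are assumptions:
+
+```python
+import json
+
+from openai import OpenAI  # requires the openai >= 1.0 client
+
+client = OpenAI()  # reads OPENAI_API_KEY from the environment
+
+# Hypothetical prompt file; the actual JSON layout in this repo may differ.
+with open("prompts/sdoh_prompt.json") as f:
+    prompt = json.load(f)["prompt"]
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": prompt}],
+    temperature=0.7,
+)
+
+# The completion text contains the synthetic sentences; the exact
+# format depends on the prompt and needs post-processing.
+print(response.choices[0].message.content)
+```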