<div align="center">

<h2 style="border-bottom: 1px solid lightgray;">Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion</h2>

<!-- Badges and Links Section -->
<div style="display: flex; align-items: center; justify-content: center;">

<p align="center">
  <a href='https://arxiv.org/pdf/2403.07721'><img src='http://img.shields.io/badge/Paper-arxiv.2403.07721-B31B1B.svg'></a>
  <a href='https://huggingface.co/datasets/LidongYang/EEG_Image_decode/tree/main'><img src='https://img.shields.io/badge/EEG Image decode-%F0%9F%A4%97%20Hugging%20Face-blue'></a>
</p>

</div>

<br/>

</div>
<!-- 
<img src="bs=16_test_acc.png" alt="Framework" style="max-width: 90%; height: auto;"/> -->
<!-- 
<img src="test_acc.png" alt="Framework" style="max-width: 90%; height: auto;"/> -->

<!-- As the number of training epochs increases, the test set accuracy of the different methods. (Top: batch size 16. Bottom: batch size 1024.) -->

<!-- 
<img src="temporal_analysis.png" alt="Framework" style="max-width: 90%; height: auto;"/>
Examples of growing window image reconstruction with 5 different random seeds. -->

<img src="fig-framework.png" alt="Framework" style="max-width: 100%; height: auto;"/>
22
<img src="fig-framework.png" alt="Framework" style="max-width: 100%; height: auto;"/>
36
Framework of our proposed method.
23
Framework of our proposed method.
37
24
38
25
39
26
40
27
41
<!--  -->
28
<!--  -->
42
<img src="fig-genexample.png" alt="fig-genexample" style="max-width: 90%; height: auto;"/>  
29
<img src="fig-genexample.png" alt="fig-genexample" style="max-width: 90%; height: auto;"/>  
43
30
44
Some examples of using EEG to reconstruct stimulus images.


## News:
- [2024/09/26] Our paper is accepted to **NeurIPS 2024**.
- [2024/09/25] We have updated the [arxiv](https://arxiv.org/abs/2403.07721) paper.
- [2024/08/01] Updated the scripts for training and inference in different tasks.
- [2024/05/19] Updated the dataset loading scripts.
- [2024/03/12] The [arxiv](https://arxiv.org/abs/2403.07721) paper is available.

<!-- ## Environment setup -->
<h2 style="border-bottom: 1px solid lightgray; margin-bottom: 5px;">Environment setup</h2>

Run ``setup.sh`` to quickly create a conda environment that contains the packages necessary to run our scripts; then activate the environment with ``conda activate BCI``.

```
. setup.sh
```
You can also create a new conda environment and install the required dependencies by running:
```
conda env create -f environment.yml
conda activate BCI

pip install wandb
pip install einops
```
Additional packages needed to run all the code:
```
pip install open_clip_torch

pip install transformers==4.28.0.dev0
pip install diffusers==0.24.0

# Below is the braindecode installation command for the most common use case.
pip install braindecode==0.8.1
```
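
After installation, a quick sanity check such as the one below (our suggestion, not part of the repository) confirms that the key packages import and that a GPU is visible:
```python
# Minimal environment sanity check (illustrative only).
import torch
import open_clip        # installed as open_clip_torch
import diffusers
import braindecode
import einops
import wandb

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("diffusers", diffusers.__version__, "| braindecode", braindecode.__version__)
```
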
<!-- ## Quick training and test  -->
<h2 style="border-bottom: 1px solid lightgray; margin-bottom: 5px;">Quick training and test</h2>

If you want to quickly reproduce the results in the paper, please download the relevant ``preprocessed data`` and ``model weights`` from [Hugging Face](https://huggingface.co/datasets/LidongYang/EEG_Image_decode) first.
#### 1. Image Retrieval
We provide a script to train the EEG encoder with our training strategy and evaluate retrieval during training. Please modify your dataset path and run:
```
cd Retrieval/
python ATMS_retrieval.py --logger True --gpu cuda:0  --output_dir ./outputs/contrast
```
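
For intuition, zero-shot retrieval amounts to ranking the CLIP image embeddings of the candidate classes by cosine similarity to each predicted EEG embedding. The sketch below is a simplified illustration with random tensors; the names and shapes are our assumptions, not the script's actual variables:
```python
import torch
import torch.nn.functional as F

# Illustrative shapes: 200 test trials, 200 candidate images, 1024-d embedding space (assumed).
eeg_emb = F.normalize(torch.randn(200, 1024), dim=-1)   # EEG encoder outputs
img_emb = F.normalize(torch.randn(200, 1024), dim=-1)   # CLIP image embeddings of the candidates

similarity = eeg_emb @ img_emb.T                         # cosine similarity matrix
top1 = (similarity.argmax(dim=1) == torch.arange(200)).float().mean()
print(f"top-1 retrieval accuracy: {top1:.3f}")
```
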
We also provide a script for ``joint subject training``, which trains on all subjects jointly and tests on a specific subject:
```
cd Retrieval/
python ATMS_retrieval_joint_train.py --joint_train True --sub sub-01 --logger True --gpu cuda:0  --output_dir ./outputs/contrast
```
Additionally, you can replicate the results of other methods (e.g., EEGNetV4) by running:
```
cd Retrieval/
python contrast_retrieval.py --encoder_type EEGNetv4_Encoder --epochs 30 --batch_size 1024
```
#### 2. Image Reconstruction
We provide quick training and inference scripts for the ``CLIP pipeline`` of visual reconstruction. Please modify your dataset path and run zero-shot inference on the 200-class test set:
```
# Train the encoder and generate EEG features for Subject 8
cd Generation/
python ATMS_reconstruction.py --insubject True --subjects sub-08 --logger True \
--gpu cuda:0  --output_dir ./outputs/contrast
```
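
Conceptually, this training step aligns EEG embeddings with CLIP image embeddings using a CLIP-style (InfoNCE) contrastive objective. The snippet below is only a schematic illustration under assumed shapes and names, not the repository's training code:
```python
import torch
import torch.nn.functional as F

def clip_style_loss(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE loss between a batch of EEG and image embeddings."""
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    img_emb = F.normalize(img_emb, dim=-1)
    logits = eeg_emb @ img_emb.T / temperature   # (B, B) similarity matrix
    targets = torch.arange(len(eeg_emb))          # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

loss = clip_style_loss(torch.randn(16, 1024), torch.randn(16, 1024))
```
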
```
# Reconstruct images for Subject 8
Generation_metrics_sub8.ipynb
```
We also provide scripts for image reconstruction combined with the ``low-level pipeline``.
```
cd Generation/
# Step 1: train the VAE encoder and then generate low-level images
python train_vae_latent_512_low_level_no_average.py

# Step 2: load the low-level images and then reconstruct them
1x1024_reconstruct_sdxl.ipynb
```
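
The low-level branch predicts Stable Diffusion VAE latents from EEG and decodes them into blurry, structure-preserving images. The snippet below sketches only the decoding step with ``diffusers``; the checkpoint name, latent shape, and scaling are assumptions, not the exact code in ``train_vae_latent_512_low_level_no_average.py``:
```python
import torch
from diffusers import AutoencoderKL

# Assumed VAE checkpoint; the one used by the repository may differ.
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae", torch_dtype=torch.float16).to("cuda")

# Suppose `latents` are EEG-predicted VAE latents of shape (1, 4, 64, 64).
latents = torch.randn(1, 4, 64, 64, dtype=torch.float16, device="cuda")
with torch.no_grad():
    low_level = vae.decode(latents / vae.config.scaling_factor).sample  # (1, 3, 512, 512), values in [-1, 1]
```
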
We provide scripts for caption generation combined with the ``semantic-level pipeline``.
```
cd Generation/
# Step 1: train the feature adapter
image_adapter.ipynb

# Step 2: generate captions from the EEG latents
GIT_caption_batch.ipynb

# Step 3: load the text prompts and then reconstruct images
1x1024_reconstruct_sdxl.ipynb
```
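
In step 3, the generated caption (optionally together with the low-level image) is handed to SDXL. Below is a minimal sketch using the ``diffusers`` img2img pipeline; the caption, file paths, and settings are placeholders rather than the notebook's exact configuration:
```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

caption = "a photo of a golden retriever on the grass"   # hypothetical EEG-derived caption
low_level = Image.open("low_level_sub08_000.png")         # hypothetical low-level image path

# `strength` controls how much of the low-level structure is preserved vs. repainted.
recon = pipe(prompt=caption, image=low_level, strength=0.75, guidance_scale=7.5).images[0]
recon.save("reconstruction_000.png")
```
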
To evaluate the quality of the reconstructed images, modify the paths of the reconstructed images and the original stimulus images in the notebook and run:
```
# Compute metrics, adapted from MindEye
Reconstruction_Metrics_ATM.ipynb
```
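
As a rough illustration of this kind of evaluation, the snippet below computes SSIM and pixel correlation between one reconstruction and its stimulus under assumed file paths; the notebook itself follows the fuller MindEye metric suite:
```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def load(path, size=(512, 512)):
    return np.asarray(Image.open(path).convert("RGB").resize(size), dtype=np.float32)

recon = load("reconstruction_000.png")   # hypothetical paths
target = load("stimulus_000.png")

ssim_score = ssim(recon, target, channel_axis=2, data_range=255.0)
pix_corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(f"SSIM: {ssim_score:.3f}  PixCorr: {pix_corr:.3f}")
```
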

<!-- ## Data availability -->
<h2 style="border-bottom: 1px solid lightgray; margin-bottom: 5px;">Data availability</h2>

We provide you with the ``preprocessed EEG`` and ``preprocessed MEG`` data used in our paper at [Hugging Face](https://huggingface.co/datasets/LidongYang/EEG_Image_decode), as well as the raw image data.
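
For example, the dataset repository (or a subset of it) can be fetched programmatically with ``huggingface_hub``; the local path and the commented filter pattern below are placeholders:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="LidongYang/EEG_Image_decode",
    repo_type="dataset",
    local_dir="./EEG_Image_decode_data",        # choose your own path
    # allow_patterns=["Preprocessed_data/*"],   # hypothetical subfolder; adjust to the repo layout
)
```
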
Note that the experimental paradigms of the THINGS-EEG and THINGS-MEG datasets differ, so we provide the images and data for the two datasets separately.
You can also download the original THINGS-EEG and THINGS-MEG datasets from their public repositories.
The raw and preprocessed EEG dataset, as well as the training and test images, are available on [OSF](https://osf.io/3jk45/).
- ``Raw EEG data:`` `../project_directory/eeg_dataset/raw_data/`.
- ``Preprocessed EEG data:`` `../project_directory/eeg_dataset/preprocessed_data/`.
- ``Training and test images:`` `../project_directory/image_set/`.
The raw and preprocessed MEG dataset, as well as the training and test images, are available on [OpenNeuro](https://openneuro.org/datasets/ds004212/versions/2.0.0).
<!-- ## EEG/MEG preprocessing -->
<h2 style="border-bottom: 1px solid lightgray; margin-bottom: 5px;">EEG/MEG preprocessing</h2>

Modify your path and execute the following code to perform the same preprocessing on the raw data as in our experiment:
```
cd EEG-preprocessing/
python preprocessing.py
```
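
In broad strokes, the preprocessing epochs the raw recordings around stimulus onsets, baseline-corrects, and downsamples them. The MNE-based sketch below illustrates only these generic steps; the file name, event handling, and parameters are assumptions rather than the exact settings of ``preprocessing.py``:
```python
import mne

raw = mne.io.read_raw_brainvision("sub-01_task-rsvp.vhdr", preload=True)  # hypothetical file
events, _ = mne.events_from_annotations(raw)

epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)   # epoch around stimulus onset
epochs.resample(250)                                     # assumed target sampling rate

data = epochs.get_data()  # (n_trials, n_channels, n_times) array fed to the EEG encoder
```
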
```
cd MEG-preprocessing/
pre_possess.ipynb
```
You can also get the dataset used in this project from BaiduNetDisk ([link](https://pan.baidu.com/s/1-1hgpoi4nereLVqE4ylE_g?pwd=nid5)).
## TODO
- [x] Release retrieval and reconstruction scripts.
- [x] Update training scripts for the reconstruction pipeline.
- [ ] Add a validation set to improve the accuracy of performance evaluation.
<!-- ## Acknowledgements -->
<h2 style="border-bottom: 1px solid lightgray; margin-bottom: 5px;">Acknowledgements</h2>

1. Thanks to Y. Song et al. for their contributions to dataset preprocessing and the neural network architecture; we refer to their work:</br>"[Decoding Natural Images from EEG for Object Recognition](https://arxiv.org/pdf/2308.13234.pdf)".</br> Yonghao Song, Bingchuan Liu, Xiang Li, Nanlin Shi, Yijun Wang, and Xiaorong Gao.
2. We also thank the authors of [SDRecon](https://github.com/yu-takagi/StableDiffusionReconstruction) for providing their code and results. Some parts of the training scripts are based on [MindEye](https://medarc-ai.github.io/mindeye/) and [MindEye2](https://github.com/MedARC-AI/MindEyeV2). Thanks for these awesome research works.
3. The THINGS-EEG dataset cited in our paper:</br>"[A large and rich EEG dataset for modeling human visual object recognition](https://www.sciencedirect.com/science/article/pii/S1053811922008758?via%3Dihub)".</br>
Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw M. Cichy.
4. The THINGS-MEG dataset we also use is described in:</br>"[THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior](https://elifesciences.org/articles/82580.pdf)".</br> Hebart, Martin N., Oliver Contier, Lina Teichmann, Adam H. Rockter, Charles Y. Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, and Chris I. Baker.
<!-- ## Citation -->
<h2 style="border-bottom: 1px solid lightgray; margin-bottom: 5px;">Citation</h2>

```bibtex
@inproceedings{li2024visual,
  title={Visual Decoding and Reconstruction via {EEG} Embeddings with Guided Diffusion},
  author={Dongyang Li and Chen Wei and Shiying Li and Jiachen Zou and Quanying Liu},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=RxkcroC8qP}
}

@article{li2024visual,
  title={Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion},
  author={Li, Dongyang and Wei, Chen and Li, Shiying and Zou, Jiachen and Liu, Quanying},
  journal={arXiv preprint arXiv:2403.07721},
  year={2024}
}
```