<p align="center">
  <a href="https://github.com/SuperBruceJia/EEG-DL"> <img width="500px" src="https://github.com/SuperBruceJia/EEG-DL/raw/master/Logo.png"></a>
  <br />
  <br />
  <a href="https://gitter.im/EEG-DL/community"><img alt="Chat on Gitter" src="https://img.shields.io/gitter/room/nwjs/nw.js.svg" /></a>
  <a href="https://www.anaconda.com/"><img alt="Python Version" src="https://img.shields.io/badge/Python-3.x-green.svg" /></a>
  <a href="https://www.tensorflow.org/install"><img alt="TensorFlow Version" src="https://img.shields.io/badge/TensorFlow-1.13.1-red.svg" /></a>
  <a href="https://github.com/SuperBruceJia/EEG-DL/blob/master/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
</p>

<!-- <div align="center">
    <a href="https://github.com/SuperBruceJia/EEG-DL"> <img width="500px" src="https://github.com/SuperBruceJia/EEG-DL/raw/master/Logo.png"></a>
</div> -->

--------------------------------------------------------------------------------

# Welcome to EEG Deep Learning Library

**EEG-DL** is a Deep Learning (DL) library written in [TensorFlow](https://www.tensorflow.org) for EEG task (signal) classification. It provides the latest DL algorithms and is kept up to date.

<!-- [![Gitter](https://img.shields.io/gitter/room/nwjs/nw.js.svg)](https://gitter.im/EEG-DL/community)
[![Python 3](https://img.shields.io/badge/Python-3.x-green.svg)](https://www.anaconda.com/)
[![TensorFlow 1.13.1](https://img.shields.io/badge/TensorFlow-1.13.1-red.svg)](https://www.tensorflow.org/install)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/SuperBruceJia/EEG-DL/blob/master/LICENSE) -->

## Table of Contents
<ul>
<li><a href="#Documentation">Documentation</a></li>
<li><a href="#Usage-Demo">Usage Demo</a></li>
<li><a href="#Notice">Notice</a></li>
<li><a href="#Research-Ideas">Research Ideas</a></li>
<li><a href="#Common-Issues">Common Issues</a></li>
<li><a href="#Structure-of-the-Code">Structure of the Code</a></li>
<li><a href="#Citation">Citation</a></li>
<li><a href="#Other-Useful-Resources">Other Useful Resources</a></li>
<li><a href="#Contribution">Contribution</a></li>
<li><a href="#Organizations">Organizations</a></li>
</ul>

## Documentation
**The supported models** include

| No.   | Model                                                  | Codes           |
| :----:| :----:                                                 | :----:          |
| 1     | Deep Neural Networks                                   | [DNN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/DNN.py) |
| 2     | Convolutional Neural Networks [[Paper]](https://iopscience.iop.org/article/10.1088/1741-2552/ab4af6/meta) [[Tutorial]](https://github.com/SuperBruceJia/EEG-Motor-Imagery-Classification-CNNs-TensorFlow) | [CNN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/CNN.py) |
| 3     | Deep Residual Convolutional Neural Networks [[Paper]](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf) | [ResNet](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/ResCNN.py) |
| 4     | Thin Residual Convolutional Neural Networks [[Paper]](https://arxiv.org/abs/1902.10107) | [Thin ResNet](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/Thin_ResNet.py) |
| 5     | Densely Connected Convolutional Neural Networks [[Paper]](https://arxiv.org/abs/1608.06993) | [DenseNet](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/DenseCNN.py) |
| 6     | Fully Convolutional Neural Networks [[Paper]](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf) | [FCN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/Fully_Conv_CNN.py) |
| 7     | One Shot Learning with Siamese Networks (CNNs Backbone) <br> [[Paper]](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) [[Tutorial]](https://towardsdatascience.com/one-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d) | [Siamese Networks](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/Siamese_Network.py) |
| 8     | Graph Convolutional Neural Networks <br> [[Paper]](https://ieeexplore.ieee.org/document/9889159) [[Presentation]](https://shuyuej.com/files/Presentation/A_Summary_Three_Projects.pdf) | [GCN / Graph CNN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/lib_for_GCN/GCN_Model.py) |
| 9     | Deep Residual Graph Convolutional Neural Networks [[Paper]](https://arxiv.org/abs/2007.13484) | [ResGCN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/lib_for_GCN/ResGCN_Model.py) |
| 10    | Densely Connected Graph Convolutional Neural Networks | [DenseGCN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/lib_for_GCN/DenseGCN_Model.py) |
| 11    | Bayesian Convolutional Neural Network <br> via Variational Inference [[Paper]](https://arxiv.org/abs/1901.02731) | [Bayesian CNNs](https://github.com/SuperBruceJia/EEG-BayesianCNN) |
| 12    | Recurrent Neural Networks [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [RNN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/RNN.py) |
| 13    | Attention-based Recurrent Neural Networks [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [RNN with Attention](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/RNN_with_Attention.py) |
| 14    | Bidirectional Recurrent Neural Networks [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [BiRNN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiRNN.py) |
| 15    | Attention-based Bidirectional Recurrent Neural Networks [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [BiRNN with Attention](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiRNN_with_Attention.py) |
| 16    | Long-short Term Memory [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [LSTM](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/LSTM.py) |
| 17    | Attention-based Long-short Term Memory [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [LSTM with Attention](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/LSTM_with_Attention.py) |
| 18    | Bidirectional Long-short Term Memory [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [BiLSTM](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiLSTM.py) |
| 19    | Attention-based Bidirectional Long-short Term Memory [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [BiLSTM with Attention](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiLSTM_with_Attention.py) |
| 20    | Gated Recurrent Unit [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [GRU](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/GRU.py) |
| 21    | Attention-based Gated Recurrent Unit [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [GRU with Attention](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/GRU_with_Attention.py) |
| 22    | Bidirectional Gated Recurrent Unit [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [BiGRU](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiGRU.py) |
| 23    | Attention-based Bidirectional Gated Recurrent Unit [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [BiGRU with Attention](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiGRU_with_Attention.py) |
| 24    | Attention-based BiLSTM + GCN [[Paper]](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full) | [Attention-based BiLSTM](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/BiLSTM_with_Attention.py) <br> [GCN](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/lib_for_GCN/GCN_Model.py) |
| 25    | Transformer [[Paper]](https://arxiv.org/abs/1706.03762) [[Paper]](https://arxiv.org/abs/2010.11929) | [Transformer](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/main-Transformer.py) |
| 26    | Transfer Learning with Transformer <br> (**This code is only for reference!**) <br> (**You can modify the codes to fit your data.**) | Stage 1: [Pre-training](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/main-pretrain_model.py) <br> Stage 2: [Fine Tuning](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/main-finetuning_model.py) |

**One EEG Motor Imagery (MI) benchmark** is currently supported. Other benchmarks in the field of EEG or BCI can be found [here](https://github.com/meagmohit/EEG-Datasets).

| No.     | Dataset                                                                          | Tutorial |
| :----:  | :----:                                                                           | :----:   |
| 1       | [EEG Motor Movement/Imagery Dataset](https://archive.physionet.org/pn4/eegmmidb/) | [Tutorial](https://github.com/SuperBruceJia/EEG-Motor-Imagery-Classification-CNNs-TensorFlow) |

**The evaluation criteria** consist of

| Evaluation Metrics                                                        | Tutorial |
| :----:                                                                    | :----:   |
| Confusion Matrix | [Tutorial](https://towardsdatascience.com/understanding-confusion-matrix-a9ad42dcfd62) |
| Accuracy / Precision / Recall / F1 Score / Kappa Coefficient | [Tutorial](https://towardsdatascience.com/understanding-confusion-matrix-a9ad42dcfd62) |
| Receiver Operating Characteristic (ROC) Curve / Area under the Curve (AUC) | - |
| Paired t-test via R language | [Tutorial](https://www.analyticsvidhya.com/blog/2019/05/statistics-t-test-introduction-r-implementation/) |

*The evaluation metrics are mainly implemented for **four-class classification**. If you wish to switch to two-class or three-class classification, please modify [this file](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Evaluation_Metrics/Metrics.py) to match the classes of your own dataset. Meanwhile, the details of the evaluation metrics can be found in [this paper](https://iopscience.iop.org/article/10.1088/1741-2552/ab4af6/meta).*
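
For a quick sanity check outside the library, the same metrics can also be reproduced with scikit-learn. Below is a minimal, illustrative sketch, not the library's own [Metrics.py](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Evaluation_Metrics/Metrics.py); the CSV paths and the assumption that `y_true` / `y_pred` are 1-D integer label arrays are placeholders you would adapt to your own data.

```python
# Illustrative sketch: compute the reported metrics from predicted vs. true labels.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, precision_recall_fscore_support)

y_true = np.loadtxt('test_label.csv', delimiter=',').astype(int)    # placeholder path
y_pred = np.loadtxt('predictions.csv', delimiter=',').astype(int)   # placeholder path

print('Confusion matrix:\n', confusion_matrix(y_true, y_pred))
print('Accuracy:', accuracy_score(y_true, y_pred))
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
print('Precision:', precision, 'Recall:', recall, 'F1:', f1)
print('Kappa:', cohen_kappa_score(y_true, y_pred))
```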

## Usage Demo

1. ***(Under Any Python Environment)*** Download the [EEG Motor Movement/Imagery Dataset](https://archive.physionet.org/pn4/eegmmidb/) via [this script](https://github.com/SuperBruceJia/EEG-DL/blob/master/Download_Raw_EEG_Data/MIND_Get_EDF.py).

    ```text
    $ python MIND_Get_EDF.py
    ```

2. ***(Under Python 2.7 Environment)*** Read the .edf files (one of the raw EEG signal formats) and save them into Matlab .m files via [this script](https://github.com/SuperBruceJia/EEG-DL/blob/master/Download_Raw_EEG_Data/Extract-Raw-Data-Into-Matlab-Files.py). FYI, this script must be executed under a **Python 2 environment (Python 2.7 is recommended)** because of some Python 2 syntax. Running it under Python 3 may raise no error, but the labels of the EEG tasks would be totally messed up.

    ```text
    $ python Extract-Raw-Data-Into-Matlab-Files.py
    ```
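
    For orientation only, the core of this conversion step looks roughly like the sketch below (Python 3 syntax, using `pyedflib` and `scipy.io.savemat`). It is **not** a drop-in replacement for the provided script, and the file name is a placeholder, so please still run `Extract-Raw-Data-Into-Matlab-Files.py` under Python 2.7.

    ```python
    # Illustrative sketch: read one EDF recording and dump its signals to a .mat file.
    # The paths and variable names are placeholders, not part of EEG-DL.
    import numpy as np
    import pyedflib
    from scipy.io import savemat

    reader = pyedflib.EdfReader('S001R04.edf')                   # one PhysioNet EDF file
    n_channels = reader.signals_in_file
    signals = np.vstack([reader.readSignal(ch) for ch in range(n_channels)])
    onsets, durations, descriptions = reader.readAnnotations()   # T0/T1/T2 event markers
    reader.close()

    savemat('S001R04.mat', {'signals': signals,
                            'onsets': onsets,
                            'descriptions': descriptions})
    ```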

3. Preprocess the dataset with Matlab and save the data into Excel files (training_set, training_label, test_set, and test_label) via [these scripts](https://github.com/SuperBruceJia/EEG-DL/tree/master/Preprocess_EEG_Data) with regard to the different models. FYI, every row of the Excel file is one sample, and the columns can be regarded as features, e.g., 4096 columns mean 64 channels X 64 time points. Later, the models will reshape the 4096 columns into a matrix with the shape 64 channels X 64 time points. You can change the number of columns to fit your own needs, e.g., the real dimension of your own dataset.
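
    As a concrete illustration of the expected layout, the sketch below (placeholder file name; 64 channels X 64 time points assumed) shows how one row of such a CSV maps back to a channel-by-time matrix:

    ```python
    # Illustrative sketch: one CSV row = one sample with 64 x 64 = 4096 features.
    import numpy as np
    import pandas as pd

    n_channels, n_timepoints = 64, 64                          # adapt to your montage
    data = pd.read_csv('training_set.csv', header=None).values.astype('float32')
    print(data.shape)                                          # (num_samples, 4096)

    one_trial = data[0].reshape(n_channels, n_timepoints)      # back to channels x time
    print(one_trial.shape)                                     # (64, 64)
    ```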

4. ***(Prerequisites)*** Train and test deep learning models **under the Python 3.6 environment (highly recommended)** for EEG signal / task classification via [the EEG-DL library](https://github.com/SuperBruceJia/EEG-DL/tree/master/Models), which provides multiple SOTA DL models.

    ```text
    Python Version: Python 3.6 (Recommended)
    TensorFlow Version: TensorFlow 1.13.1
    ```

    Use the command below to install the TensorFlow GPU version 1.13.1:

    ```text
    $ pip install --upgrade --force-reinstall tensorflow-gpu==1.13.1 --user
    ```

5. Read the evaluation criteria (across training iterations) via [TensorBoard](https://www.tensorflow.org/tensorboard). You can follow [this tutorial](https://www.guru99.com/tensorboard-tutorial.html). When you finish training the model, you will find an "events.out.tfevents.***" file in the output folder, e.g., "/Users/shuyuej/Desktop/trained_model/". You can then use the following command in your terminal:

    ```text
    $ tensorboard --logdir="/Users/shuyuej/Desktop/trained_model/" --host=127.0.0.1
    ```

    You can open the website in [Google Chrome](https://www.google.com/chrome/) (Highly Recommended).

    ```text
    http://127.0.0.1:6006/
    ```

    Then you can read and save the criteria into Excel .csv files.
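
    If you prefer to script this export instead of downloading the CSVs from the TensorBoard web page, a small sketch using TensorBoard's `EventAccumulator` might look like the following (the tag name `'accuracy'` and the paths are assumptions; use whatever tags your run actually logs):

    ```python
    # Illustrative sketch: dump one scalar tag from a TF event file to a CSV file.
    import pandas as pd
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    log_dir = '/Users/shuyuej/Desktop/trained_model/'   # folder with events.out.tfevents.*
    acc = EventAccumulator(log_dir)
    acc.Reload()                                        # parse the event file(s)

    print(acc.Tags()['scalars'])                        # list the scalar tags that were logged
    events = acc.Scalars('accuracy')                    # assumed tag name -- adapt to your run
    df = pd.DataFrame([(e.step, e.value) for e in events], columns=['step', 'value'])
    df.to_csv('accuracy.csv', index=False)
    ```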

6. Finally, draw beautiful figures for your paper using Matlab or Python. Please follow [these scripts](https://github.com/SuperBruceJia/EEG-DL/tree/master/Draw_Photos).
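
    For example, a quick confusion-matrix figure can be produced with matplotlib along the lines of the sketch below. This is only a minimal stand-in for the full plotting scripts in [Draw_Photos](https://github.com/SuperBruceJia/EEG-DL/tree/master/Draw_Photos), and the matrix values are made-up placeholders:

    ```python
    # Illustrative sketch: render a 4-class confusion matrix as a heatmap.
    import matplotlib.pyplot as plt
    import numpy as np

    cm = np.array([[50, 3, 2, 1],      # placeholder counts -- replace with your own results
                   [4, 48, 2, 2],
                   [3, 2, 49, 2],
                   [1, 2, 3, 50]])

    fig, ax = plt.subplots()
    im = ax.imshow(cm, cmap='Blues')
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, cm[i, j], ha='center', va='center')
    ax.set_xlabel('Predicted label')
    ax.set_ylabel('True label')
    fig.colorbar(im)
    plt.savefig('confusion_matrix.png', dpi=300)
    ```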
## Notice
1. I have tested all the files (Python and Matlab) under macOS. Be advised that for some Matlab files, several Matlab functions differ between the Windows operating system (OS) and macOS. For example, I used the "readmatrix" function to read CSV files under macOS, but I had to use the "csvread" function under Windows because the "readmatrix" function was not available there. If you run into similar problems, I recommend you Google or Baidu them. You can definitely work them out.

2. For the GCNs-Net (GCN model), the graph convolutional layer leaves the dimensionality of the graph unchanged, while each max-pooling layer halves it. That means, if you have an N X N graph Laplacian, after one max-pooling layer the dimension will be N/2 X N/2. A 15-channel EEG system therefore cannot use max-pooling; you would have to select a channel count such as 14 --> 7, 12 --> 6 --> 3, 10 --> 5, or 8 --> 4 --> 2 --> 1, etc. The details can be reviewed in [this paper](https://arxiv.org/abs/2006.08924).
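
    A tiny helper (my own illustration, not part of the library) makes the constraint explicit: the number of usable max-pooling (coarsening) levels is the number of times the channel count divides evenly by 2.

    ```python
    # Illustrative sketch: how many max-pooling levels an N-channel montage allows.
    def max_pooling_levels(n_channels):
        levels = 0
        while n_channels % 2 == 0:
            n_channels //= 2
            levels += 1
        return levels

    print(max_pooling_levels(64))   # 6 levels: 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1
    print(max_pooling_levels(14))   # 1 level:  14 -> 7
    print(max_pooling_levels(15))   # 0 levels: 15 is odd, so no pooling is possible
    ```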

3. The **Loss Function** can be changed or modified in [this file](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Loss_Function/Loss.py).

4. The **Dataset Loader** can be changed or modified in [this file](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/DatasetAPI/DataLoader.py).

## Research Ideas
1. Dynamic Graph Convolutional Neural Networks [[Paper Survey]](https://shuyuej.com/files/EEG/Dynamic-GCN-Survey.pdf) [[Paper Reading]](https://github.com/SuperBruceJia/paper-reading/tree/master/Graph-Neural-Network/Dynamic-GCN-Papers)

2. Neural Architecture Search / AutoML (Automatic Machine Learning) [[Tsinghua AutoGraph]](https://github.com/THUMNLab/AutoGL)

3. Reinforcement Learning Algorithms (_e.g._, Deep Q-Learning) [[Tsinghua Tianshou]](https://github.com/thu-ml/tianshou) [[Doc for Chinese Readers]](https://tianshou.readthedocs.io/zh/latest/docs/toc.html)

4. Bayesian Convolutional Neural Networks [[Paper]](https://arxiv.org/abs/1901.02731) [[Thesis]](https://github.com/kumar-shridhar/Master-Thesis-BayesianCNN/raw/master/thesis.pdf) [[Codes]](https://github.com/SuperBruceJia/EEG-BayesianCNN)

5. Transformer / Self-attention / Non-local Modeling [[Paper Collections]](https://github.com/SuperBruceJia/paper-reading/tree/master/Machine-Learning/Transformer) [[Transformer Codes]](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/main-Transformer.py) [[Non-local Modeling PyTorch Codes]](https://github.com/SuperBruceJia/NLNet-IQA)

    [[Why Non-local Modeling?]](https://github.com/SuperBruceJia/NLNet-IQA) [[Paper]](https://shuyuej.com/files/MMSP/MMSP22_Paper.pdf) [[A Detailed Presentation]](https://shuyuej.com/files/Presentation/A_Summary_Three_Projects.pdf) [[Slides]](https://shuyuej.com/files/MMSP/MMSP22_Slides.pdf) [[Poster]](https://shuyuej.com/files/MMSP/MMSP22_Poster.pdf)

    [[Why Transformer?]](https://github.com/SuperBruceJia/paper-reading/blob/master/Transformer/Swin%20Transformer%20and%205%20Reasons%20to%20Use%20Transformer:Attention%20in%20Computer%20Vision.pdf)

    [[Transformer and Attention Mechanism Introduction]](https://github.com/SuperBruceJia/paper-reading/blob/master/Transformer/Towards%20Universal%20Models%20with%20NLP%20for%20Computer%20Vision.pdf)

    [[Annual Review of Vision Transformer Progress (in Chinese)]](https://github.com/SuperBruceJia/paper-reading/raw/master/Transformer/%E8%A7%86%E8%A7%89Transformer%20%E5%B9%B4%E5%BA%A6%E8%BF%9B%E5%B1%95%E8%AF%84%E8%BF%B0.pdf)

6. Self-supervised Learning + Transformer [[Presentation]](https://github.com/SuperBruceJia/paper-reading/raw/master/Transformer/Self-Supervised%20Learning%20in%20Computer%20Vision-%20Past%2C%20Present%2C%20Trends.pdf)

## Common Issues
1. **ValueError: Cannot feed value of shape (1024, 1) for Tensor 'input/label:0', which has shape '(1024,)'**

    To solve this issue, you have to squeeze the shape of the labels from (1024, 1) to (1024,) using np.squeeze. Please edit the [DataLoader.py file](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/DatasetAPI/DataLoader.py) and change the original code
    ```python
    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)
    train_labels = np.array(train_labels).astype('float32')

    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)
    test_labels = np.array(test_labels).astype('float32')
    ```
    to
    ```python
    train_labels = pd.read_csv(DIR + 'training_label.csv', header=None)
    train_labels = np.array(train_labels).astype('float32')
    train_labels = np.squeeze(train_labels)

    test_labels = pd.read_csv(DIR + 'test_label.csv', header=None)
    test_labels = np.array(test_labels).astype('float32')
    test_labels = np.squeeze(test_labels)
    ```

2. **InvalidArgumentError: Nan in summary histogram for training/logits/bias/gradients**

    To solve this issue, you have to comment out all the histogram summaries. Please edit the [GCN_Model.py file](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/Network/lib_for_GCN/GCN_Model.py).

    ```python
    # Comment out the following tf.summary.histogram calls in the GCN_Model.py file

    # # Histograms.
    # for grad, var in grads:
    #     if grad is None:
    #         print('warning: {} has no gradient'.format(var.op.name))
    #     else:
    #         tf.summary.histogram(var.op.name + '/gradients', grad)

    def _weight_variable(self, shape, regularization=True):
        initial = tf.truncated_normal_initializer(0, 0.1)
        var = tf.get_variable('weights', shape, tf.float32, initializer=initial)
        if regularization:
            self.regularizers.append(tf.nn.l2_loss(var))
        # tf.summary.histogram(var.op.name, var)
        return var

    def _bias_variable(self, shape, regularization=True):
        initial = tf.constant_initializer(0.1)
        var = tf.get_variable('bias', shape, tf.float32, initializer=initial)
        if regularization:
            self.regularizers.append(tf.nn.l2_loss(var))
        # tf.summary.histogram(var.op.name, var)
        return var
    ```

3. **TypeError: len() of unsized object**

    To solve this issue, you have to change the coarsening level to fit your own needs, and you can definitely change it to see the difference. Please edit the [main-GCN.py file](https://github.com/SuperBruceJia/EEG-DL/blob/master/Models/main-GCN.py). For example, if you want to apply the GCNs-Net to a 10-channel EEG system, you have to set "levels" equal to 1 or 0 because there is at most one max-pooling (10 --> 5). You can change the argument "levels" to 1 or 0 to see the difference.

    ```python
    # This is the coarsening level; you can definitely change it to observe the difference
    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=5, self_connections=False)
    ```
    to
    ```python
    # This is the coarsening level; you can definitely change it to observe the difference
    graphs, perm = coarsening.coarsen(Adjacency_Matrix, levels=1, self_connections=False)
    ```

4. **tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 7 which is outside the valid range of [0, 7).  Label values: 5 2 3 3 1 5 5 4 7 4 2 2 1 7 5 6 3 4 2 4**

    To solve this issue, for the GCNs-Net, when you make your dataset, your labels have to start from 0 rather than 1. For example, if you have seven classes, your labels should be 0 (first class), 1 (second class), 2 (third class), 3 (fourth class), 4 (fifth class), 5 (sixth class), and 6 (seventh class) instead of 1, 2, 3, 4, 5, 6, 7.
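
    A one-line remap (an illustration, assuming `labels` is a NumPy array of 1-based class indices) is usually enough:

    ```python
    # Illustrative sketch: shift 1-based labels (1..7) to 0-based labels (0..6).
    import numpy as np

    labels = np.array([5, 2, 3, 7, 1])   # placeholder 1-based labels
    labels = labels - 1                   # now 0-based: [4, 1, 2, 6, 0]
    ```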

5. **IndexError: list index out of range**

    To solve this issue, first of all, please double-check your Python environment: **a Python 2.7 environment is required.** Besides, please install version ***0.1.11*** of ***pyEDFlib***. The installation command is as follows:

    ```text
    $ pip install pyEDFlib==0.1.11
    ```

## Structure of the Code

At the root of the project, you will see:

```text
├── Download_Raw_EEG_Data
│   ├── Extract-Raw-Data-Into-Matlab-Files.py
│   ├── MIND_Get_EDF.py
│   ├── README.md
│   └── electrode_positions.txt
├── Draw_Photos
│   ├── Draw_Accuracy_Photo.m
│   ├── Draw_Box_Photo.m
│   ├── Draw_Confusion_Matrix.py
│   ├── Draw_Loss_Photo.m
│   ├── Draw_ROC_and_AUC.py
│   └── figure_boxplot.m
├── LICENSE
├── Logo.png
├── MANIFEST.in
├── Models
│   ├── DatasetAPI
│   │   └── DataLoader.py
│   ├── Evaluation_Metrics
│   │   └── Metrics.py
│   ├── Initialize_Variables
│   │   └── Initialize.py
│   ├── Loss_Function
│   │   └── Loss.py
│   ├── Network
│   │   ├── BiGRU.py
│   │   ├── BiGRU_with_Attention.py
│   │   ├── BiLSTM.py
│   │   ├── BiLSTM_with_Attention.py
│   │   ├── BiRNN.py
│   │   ├── BiRNN_with_Attention.py
│   │   ├── CNN.py
│   │   ├── DNN.py
│   │   ├── DenseCNN.py
│   │   ├── Fully_Conv_CNN.py
│   │   ├── GRU.py
│   │   ├── GRU_with_Attention.py
│   │   ├── LSTM.py
│   │   ├── LSTM_with_Attention.py
│   │   ├── RNN.py
│   │   ├── RNN_with_Attention.py
│   │   ├── ResCNN.py
│   │   ├── Siamese_Network.py
│   │   ├── Thin_ResNet.py
│   │   └── lib_for_GCN
│   │       ├── DenseGCN_Model.py
│   │       ├── GCN_Model.py
│   │       ├── ResGCN_Model.py
│   │       ├── coarsening.py
│   │       └── graph.py
│   ├── __init__.py
│   ├── main-BiGRU-with-Attention.py
│   ├── main-BiGRU.py
│   ├── main-BiLSTM-with-Attention.py
│   ├── main-BiLSTM.py
│   ├── main-BiRNN-with-Attention.py
│   ├── main-BiRNN.py
│   ├── main-CNN.py
│   ├── main-DNN.py
│   ├── main-DenseCNN.py
│   ├── main-DenseGCN.py
│   ├── main-FullyConvCNN.py
│   ├── main-GCN.py
│   ├── main-GRU-with-Attention.py
│   ├── main-GRU.py
│   ├── main-LSTM-with-Attention.py
│   ├── main-LSTM.py
│   ├── main-RNN-with-Attention.py
│   ├── main-RNN.py
│   ├── main-ResCNN.py
│   ├── main-ResGCN.py
│   ├── main-Siamese-Network.py
│   └── main-Thin-ResNet.py
├── NEEPU.png
├── Preprocess_EEG_Data
│   ├── For-CNN-based-Models
│   │   └── make_dataset.m
│   ├── For-DNN-based-Models
│   │   └── make_dataset.m
│   ├── For-GCN-based-Models
│   │   └── make_dataset.m
│   ├── For-RNN-based-Models
│   │   └── make_dataset.m
│   └── For-Siamese-Network-One-Shot-Learning
│       └── make_dataset.m
├── README.md
├── Saved_Files
│   └── README.md
├── requirements.txt
└── setup.py
```

## Citation

If you find our library useful, please consider citing our papers in your publications.
We provide the BibTeX entries below.

```bibtex
@article{hou2022gcn,
    title   = {{GCNs-Net}: A Graph Convolutional Neural Network Approach for Decoding Time-Resolved EEG Motor Imagery Signals},
    author  = {Hou, Yimin and Jia, Shuyue and Lun, Xiangmin and Hao, Ziqian and Shi, Yan and Li, Yang and Zeng, Rui and Lv, Jinglei},
    journal = {IEEE Transactions on Neural Networks and Learning Systems},
    pages   = {1-12},
    year    = {Sept. 2022},
    doi     = {10.1109/TNNLS.2022.3202569}
}

@article{hou2020novel,
    title     = {A Novel Approach of Decoding EEG Four-class Motor Imagery Tasks via Scout {ESI} and {CNN}},
    author    = {Hou, Yimin and Zhou, Lu and Jia, Shuyue and Lun, Xiangmin},
    journal   = {Journal of Neural Engineering},
    volume    = {17},
    number    = {1},
    pages     = {016048},
    year      = {Feb. 2020},
    publisher = {IOP Publishing},
    doi       = {10.1088/1741-2552/ab4af6}
}

@article{hou2022deep,
    title   = {Deep Feature Mining via the Attention-Based Bidirectional Long Short Term Memory Graph Convolutional Neural Network for Human Motor Imagery Recognition},
    author  = {Hou, Yimin and Jia, Shuyue and Lun, Xiangmin and Zhang, Shu and Chen, Tao and Wang, Fang and Lv, Jinglei},
    journal = {Frontiers in Bioengineering and Biotechnology},
    volume  = {9},
    year    = {Feb. 2022},
    url     = {https://www.frontiersin.org/article/10.3389/fbioe.2021.706229},
    doi     = {10.3389/fbioe.2021.706229},
    ISSN    = {2296-4185}
}

@article{Jia2020AttentionGCN,
    title   = {Attention-based Graph {ResNet} for Motor Intent Detection from Raw EEG signals},
    author  = {Jia, Shuyue and Hou, Yimin and Lun, Xiangmin and Lv, Jinglei},
    journal = {arXiv preprint arXiv:2007.13484},
    year    = {2022}
}
```

Our papers can be downloaded from:
1. [A Novel Approach of Decoding EEG Four-class Motor Imagery Tasks via Scout ESI and CNN](https://iopscience.iop.org/article/10.1088/1741-2552/ab4af6/meta)<br>
*Codes and Tutorials for this work can be found [here](https://github.com/SuperBruceJia/EEG-Motor-Imagery-Classification-CNNs-TensorFlow).*<br>

**Overall Framework**:

<div align="center">
    <img width="100%" src="https://user-images.githubusercontent.com/31528604/200832194-ea4198f4-e732-436c-bdec-6e454341c442.png" alt="Project1">
</div>

**Proposed CNNs Architecture**:

<div align="center">
    <img width="60%" src="https://user-images.githubusercontent.com/31528604/200834151-647319e6-9f6c-428b-b763-36d8859acab9.png" alt="Project1">
</div>

--------------------------------------------------------------------------------

2. [GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals](https://ieeexplore.ieee.org/document/9889159)<br>
***Slides Presentation** for this work can be found [here](https://shuyuej.com/files/EEG/GCNs-Net-Presentation.pdf).*<br>

<div align="center">
    <img width="100%" src="https://github.com/SuperBruceJia/SuperBruceJia.github.io/raw/master/imgs/Picture2.png" alt="Project2">
</div>

--------------------------------------------------------------------------------

3. [Deep Feature Mining via Attention-based BiLSTM-GCN for Human Motor Imagery Recognition](https://www.frontiersin.org/articles/10.3389/fbioe.2021.706229/full)<br>
***Slides Presentation** for this work can be found [here](https://shuyuej.com/files/EEG/BiLSTM-GCN-Presentation.pdf).*<br>

<div align="center">
    <img width="100%" src="https://user-images.githubusercontent.com/31528604/200833742-1b775246-7bb8-4add-a6f9-210f1c5249a0.JPEG" alt="Project3.1">
</div>

<div align="center">
    <img width="100%" src="https://user-images.githubusercontent.com/31528604/200833795-157eba9e-0f1b-4f24-8038-8fb385fedcbd.JPEG" alt="Project4.1">
</div>

--------------------------------------------------------------------------------

4. [Attention-based Graph ResNet for Motor Intent Detection from Raw EEG signals](https://arxiv.org/abs/2007.13484)

## Other Useful Resources

I think the following presentations will be helpful when you get started with Python and TensorFlow.

1. Python Environment Setting-up Tutorial [download](https://github.com/SuperBruceJia/paper-reading/raw/master/other-presentations/Python-Environment-Set-up.pptx)

2. Usage of Cloud Server and Setting-up Tutorial [download](https://github.com/SuperBruceJia/paper-reading/raw/master/other-presentations/Usage%20of%20Server%20and%20Setting%20Up.pdf)

3. TensorFlow for Deep Learning Tutorial [download](https://github.com/SuperBruceJia/paper-reading/raw/master/other-presentations/TensorFlow-for-Deep-Learning.pdf)

## Contribution

We always welcome contributions to help make the EEG-DL library better. If you would like to contribute or have any questions, please don't hesitate to email me at shuyuej@ieee.org.

## Organizations

The library was created and open-sourced by Shuyue Jia, supervised by Prof. Yimin Hou, at the School of Automation Engineering, Northeast Electric Power University, Jilin, Jilin, China.<br>
<a href="http://www.neepu.edu.cn/"> <img width="500" height="150" src="https://github.com/SuperBruceJia/EEG-DL/raw/master/NEEPU.png"></a>