[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4060151.svg)](https://doi.org/10.5281/zenodo.4060151)

# EEG_classification

Description of the approach: https://towardsdatascience.com/sleep-stage-classification-from-single-channel-eeg-using-convolutional-neural-networks-5c710d92d38e

Sleep Stage Classification from Single Channel EEG using Convolutional Neural Networks

*****

*Photo by [Paul M](https://unsplash.com/photos/7i9yLoUgoP8?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/search/photos/owl?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)*

Quality sleep is an important part of a healthy lifestyle, as a lack of it can cause a list of [issues](https://www.webmd.com/sleep-disorders/features/10-results-sleep-loss#1) such as a higher risk of cancer and chronic fatigue. This means that having the tools to automatically and easily monitor sleep can be a powerful way to help people sleep better.<br> Doctors use a recording of a signal called an EEG, which measures the electrical activity of the brain using electrodes, to understand the sleep stages of a patient and make a diagnosis about the quality of their sleep.

In this post we will train a neural network to do the sleep stage classification automatically from EEGs.

### **Data**

Our input is a sequence of 30-second EEG epochs, where each epoch has a label from [{“W”, “N1”, “N2”, “N3”, “REM”}](https://en.wikipedia.org/wiki/Sleep_cycle).

*Fig 1: EEG Epoch*

*Fig 2: Sleep stages through the night*

This post is based on publicly available EEG sleep data ([Sleep-EDF](https://www.physionet.org/physiobank/database/sleep-edfx/)) recorded from 20 subjects, 19 of whom have two full nights of sleep. We use the pre-processing scripts available in this [repo](https://github.com/akaraspt/deepsleepnet) and split the train/test sets so that no study subject is in both at the same time.

The general objective is to go from a 1D sequence like in fig 1 and predict the output hypnogram like in fig 2.
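
To make the subject-wise split concrete, here is a minimal sketch. It assumes the pre-processing step wrote one `.npz` file per night containing arrays `x` (epochs of 3000 samples at 100 Hz) and `y` (per-epoch labels), and that the file name starts with a subject identifier; the exact layout produced by the deepsleepnet scripts may differ.

```python
# A minimal sketch of a subject-wise train/test split.
# Assumption: one .npz file per night with `x` (n_epochs, 3000) raw EEG
# and `y` (n_epochs,) labels; file names start with a subject id.
import os
from glob import glob

import numpy as np

files = sorted(glob("data/*.npz"))

def subject_id(path):
    # Illustrative: take the first 5 characters of the file name as the subject key,
    # so both nights of a subject end up on the same side of the split.
    return os.path.basename(path)[:5]

subjects = sorted({subject_id(f) for f in files})
test_subjects = set(subjects[:4])  # hold out a few subjects entirely

train_files = [f for f in files if subject_id(f) not in test_subjects]
test_files = [f for f in files if subject_id(f) in test_subjects]

def load(file_list):
    data = [np.load(f) for f in file_list]
    return [d["x"] for d in data], [d["y"] for d in data]

x_train, y_train = load(train_files)
x_test, y_test = load(test_files)
```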

### Model Description

Recent approaches [[1]](https://arxiv.org/pdf/1703.04046.pdf) use a sub-model that encodes each epoch into a 1D vector of fixed size, and then a second, sequential sub-model that maps each epoch’s vector to a class from [{“W”, “N1”, “N2”, “N3”, “REM”}](https://en.wikipedia.org/wiki/Sleep_cycle).

Here we use a 1D CNN to encode each epoch and then another 1D CNN or LSTM that labels the sequence of epochs to create the final [hypnogram](https://en.wikipedia.org/wiki/Hypnogram). This allows the prediction for an epoch to take its context into account.

*Sub-model 1: Epoch encoder*

*Sub-model 2: Sequential model for epoch classification*

The full model takes as input the sequence of EEG epochs (30 seconds each), where sub-model 1 is applied to each epoch using the TimeDistributed layer of [Keras](https://keras.io/), which produces a sequence of vectors. The sequence of vectors is then fed into another sub-model, like an LSTM or a CNN, that produces the sequence of output labels.<br> We also use a linear-chain [CRF](https://en.wikipedia.org/wiki/Conditional_random_field) for one of the models and show that it can improve the performance.
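
Below is a minimal sketch (written against `tf.keras`) of the two sub-models for the CNN-LSTM variant. The filter counts, kernel sizes and LSTM width are illustrative placeholders rather than the exact values used in the repo, and the CRF variant is omitted.

```python
# Sketch of the epoch encoder + sequence labeller (CNN-LSTM variant).
# We assume 30 s epochs sampled at 100 Hz, i.e. 3000 samples per epoch;
# layer sizes are illustrative, not the repo's exact configuration.
from tensorflow.keras import layers, models

N_CLASSES = 5          # W, N1, N2, N3, REM
EPOCH_LEN = 3000       # samples per 30 s epoch

def build_epoch_encoder():
    # Sub-model 1: encodes one epoch into a fixed-size vector.
    inp = layers.Input(shape=(EPOCH_LEN, 1))
    x = inp
    for filters in (32, 64, 128):
        x = layers.Conv1D(filters, kernel_size=7, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(4)(x)
    x = layers.GlobalMaxPooling1D()(x)
    return models.Model(inp, x, name="epoch_encoder")

def build_full_model(seq_len=None):
    # Sub-model 2: labels the sequence of epoch vectors (here a BiLSTM).
    encoder = build_epoch_encoder()
    seq_in = layers.Input(shape=(seq_len, EPOCH_LEN, 1))
    x = layers.TimeDistributed(encoder)(seq_in)      # one vector per epoch
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    out = layers.TimeDistributed(layers.Dense(N_CLASSES, activation="softmax"))(x)
    return models.Model(seq_in, out)

model = build_full_model()
model.summary()
```

The key piece is the `TimeDistributed` wrapper: the same epoch encoder is applied to every 30-second epoch of the night, and only then does the sequence model see the whole night at once.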

### Training Procedure

The full model is trained end-to-end from scratch using the Adam optimizer with an initial learning rate of 1e-3 that is reduced each time the validation accuracy plateaus, using the ReduceLROnPlateau Keras callback.
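
The corresponding Keras calls could look like the sketch below. It reuses the `model` built in the previous snippet; the random arrays are dummy stand-ins so the snippet runs, and the batch size and number of training epochs are illustrative, not tuned values.

```python
# Sketch of the training setup described above: Adam at 1e-3 plus
# ReduceLROnPlateau on the validation accuracy.
import numpy as np
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",  # one integer label per 30 s epoch
    metrics=["accuracy"],
)

reduce_lr = ReduceLROnPlateau(
    monitor="val_accuracy", mode="max", factor=0.5, patience=5, verbose=1
)

# Dummy data: 10 "nights" of 100 epochs, 3000 samples per epoch, labels in 0..4.
x = np.random.randn(10, 100, 3000, 1).astype("float32")
y = np.random.randint(0, 5, size=(10, 100))

model.fit(x, y, validation_split=0.2, epochs=2, batch_size=2, callbacks=[reduce_lr])
```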

*Accuracy training curves*

### Results

We compare 3 different models:

* CNN-CNN: This one uses a 1D CNN for the epoch encoding and then another 1D CNN for the sequence labeling.
* CNN-CNN-CRF: This model uses a 1D CNN for the epoch encoding and then a 1D CNN-CRF for the sequence labeling.
* CNN-LSTM: This one uses a 1D CNN for the epoch encoding and then an LSTM for the sequence labeling.

We evaluate each model on an independent test set and get the following results:

* CNN-CNN: F1 = 0.81, ACCURACY = 0.87
* CNN-CNN-CRF: F1 = 0.82, ACCURACY = 0.89
* CNN-LSTM: F1 = 0.71, ACCURACY = 0.76

The CNN-CNN-CRF outperforms the two other models because the CRF helps learn the transition probabilities between classes. The LSTM-based model does not work as well because it is more sensitive to hyper-parameters like the optimizer and the batch size, and requires extensive tuning to perform well.
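
For reference, metrics of this kind can be computed by flattening the per-epoch predictions over all test nights, as in the sketch below. It reuses `model`, `x_test` and `y_test` from the earlier snippets, and the macro averaging for F1 is an assumption rather than the repo's exact reporting choice.

```python
# Sketch: per-epoch accuracy and F1, flattening all test nights into one
# long list of epochs. Macro averaging is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true, y_pred = [], []
for x_night, y_night in zip(x_test, y_test):
    # Predict one night at a time so no padding is needed.
    probs = model.predict(x_night[None, ..., None])   # (1, n_epochs, 5)
    y_pred.append(probs.argmax(axis=-1).ravel())
    y_true.append(np.asarray(y_night).ravel())

y_true = np.concatenate(y_true)
y_pred = np.concatenate(y_pred)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
```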

*Ground Truth Hypnogram*

*Predicted Hypnogram using CNN-CNN-CRF*

Source code available here: [https://github.com/CVxTz/EEG_classification](https://github.com/CVxTz/EEG_classification)

I look forward to your suggestions and feedback.

[[1] DeepSleepNet: a Model for Automatic Sleep Stage Scoring based on Raw Single-Channel EEG](https://arxiv.org/pdf/1703.04046.pdf)

How to cite:
```
@software{mansar_youness_2020_4060151,
  author       = {Mansar Youness},
  title        = {CVxTz/EEG\_classification: v1.0},
  month        = sep,
  year         = 2020,
  publisher    = {Zenodo},
  version      = {v1.0},
  doi          = {10.5281/zenodo.4060151},
  url          = {https://doi.org/10.5281/zenodo.4060151}
}
```