|
a |
|
b/pulse_rate_starter.ipynb |
|
|
1 |
{ |
|
|
2 |
"cells": [ |
|
|
3 |
{ |
|
|
4 |
"cell_type": "markdown", |
|
|
5 |
"metadata": {}, |
|
|
6 |
"source": [ |
|
|
7 |
"## Part 1: Pulse Rate Algorithm\n", |
|
|
8 |
"\n", |
|
|
9 |
"### Contents\n", |
|
|
10 |
"Fill out this notebook as part of your final project submission.\n", |
|
|
11 |
"\n", |
|
|
12 |
"**You will have to complete both the Code and Project Write-up sections.**\n", |
|
|
13 |
"- The [Code](#Code) is where you will write a **pulse rate algorithm** and already includes the starter code.\n", |
|
|
14 |
" - Imports - These are the imports needed for Part 1 of the final project. \n", |
|
|
15 |
" - [glob](https://docs.python.org/3/library/glob.html)\n", |
|
|
16 |
" - [numpy](https://numpy.org/)\n", |
|
|
17 |
" - [scipy](https://www.scipy.org/)\n", |
|
|
18 |
"- The [Project Write-up](#Project-Write-up) to describe why you wrote the algorithm for the specific case.\n", |
|
|
19 |
"\n", |
|
|
20 |
"\n", |
|
|
21 |
"### Dataset\n", |
|
|
22 |
"You will be using the **Troika**[1] dataset to build your algorithm. Find the dataset under `datasets/troika/training_data`. The `README` in that folder will tell you how to interpret the data. The starter code contains a function to help load these files.\n", |
|
|
23 |
"\n", |
|
|
24 |
"1. Zhilin Zhang, Zhouyue Pi, Benyuan Liu, ‘‘TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type Photoplethysmographic Signals During Intensive Physical Exercise,’’IEEE Trans. on Biomedical Engineering, vol. 62, no. 2, pp. 522-531, February 2015. Link\n", |
|
|
25 |
"\n", |
|
|
26 |
"-----" |
|
|
27 |
] |
|
|
28 |
}, |
|
|
29 |
{ |
|
|
30 |
"cell_type": "markdown", |
|
|
31 |
"metadata": {}, |
|
|
32 |
"source": [ |
|
|
33 |
"### Code" |
|
|
34 |
] |
|
|
35 |
}, |
|
|
36 |
{ |
|
|
37 |
"cell_type": "code", |
|
|
38 |
"execution_count": 41, |
|
|
39 |
"metadata": {}, |
|
|
40 |
"outputs": [], |
|
|
41 |
"source": [ |
|
|
42 |
"import glob\n", |
|
|
43 |
"from tqdm import tqdm\n", |
|
|
44 |
"import numpy as np\n", |
|
|
45 |
"import scipy as sp\n", |
|
|
46 |
"import scipy.io\n", |
|
|
47 |
"import osPre\n", |
|
|
48 |
"import scipy.signal\n", |
|
|
49 |
"import os.path\n", |
|
|
50 |
"from tqdm import tqdm\n", |
|
|
51 |
"from sklearn.model_selection import train_test_split\n", |
|
|
52 |
"from sklearn.linear_model import LinearRegression\n", |
|
|
53 |
"from sklearn.model_selection import KFold, LeaveOneGroupOut\n", |
|
|
54 |
"from sklearn.metrics import mean_squared_error\n", |
|
|
55 |
"def LoadTroikaDataset():\n", |
|
|
56 |
" \"\"\"\n", |
|
|
57 |
" Retrieve the .mat filenames for the troika dataset.\n", |
|
|
58 |
"\n", |
|
|
59 |
" Review the README in ./datasets/troika/ to understand the organization of the .mat files.\n", |
|
|
60 |
"\n", |
|
|
61 |
" Returns:\n", |
|
|
62 |
" data_fls: Names of the .mat files that contain signal data\n", |
|
|
63 |
" ref_fls: Names of the .mat files that contain reference data\n", |
|
|
64 |
" <data_fls> and <ref_fls> are ordered correspondingly, so that ref_fls[5] is the \n", |
|
|
65 |
" reference data for data_fls[5], etc...\n", |
|
|
66 |
" \"\"\"\n", |
|
|
67 |
" data_dir = \"./datasets/troika/training_data\"\n", |
|
|
68 |
" data_fls = sorted(glob.glob(data_dir + \"/DATA_*.mat\"))\n", |
|
|
69 |
" ref_fls = sorted(glob.glob(data_dir + \"/REF_*.mat\"))\n", |
|
|
70 |
" return data_fls, ref_fls\n", |
|
|
71 |
"\n", |
|
|
72 |
"def LoadTroikaDataFile(data_fl):\n", |
|
|
73 |
" \"\"\"\n", |
|
|
74 |
" Loads and extracts signals from a troika data file.\n", |
|
|
75 |
"\n", |
|
|
76 |
" Usage:\n", |
|
|
77 |
" data_fls, ref_fls = LoadTroikaDataset()\n", |
|
|
78 |
" ppg, accx, accy, accz = LoadTroikaDataFile(data_fls[0])\n", |
|
|
79 |
"\n", |
|
|
80 |
" Args:\n", |
|
|
81 |
" data_fl: (str) filepath to a troika .mat file.\n", |
|
|
82 |
"\n", |
|
|
83 |
" Returns:\n", |
|
|
84 |
" numpy arrays for ppg, accx, accy, accz signals.\n", |
|
|
85 |
" \"\"\"\n", |
|
|
86 |
" data = sp.io.loadmat(data_fl)['sig']\n", |
|
|
87 |
" return data[2:]\n", |
|
|
88 |
"\n", |
|
|
89 |
"\n", |
|
|
90 |
"def AggregateErrorMetric(pr_errors, confidence_est):\n", |
|
|
91 |
" \"\"\"\n", |
|
|
92 |
" Computes an aggregate error metric based on confidence estimates.\n", |
|
|
93 |
"\n", |
|
|
94 |
" Computes the MAE at 90% availability. \n", |
|
|
95 |
"\n", |
|
|
96 |
" Args:\n", |
|
|
97 |
" pr_errors: a numpy array of errors between pulse rate estimates and corresponding \n", |
|
|
98 |
" reference heart rates.\n", |
|
|
99 |
" confidence_est: a numpy array of confidence estimates for each pulse rate\n", |
|
|
100 |
" error.\n", |
|
|
101 |
"\n", |
|
|
102 |
" Returns:\n", |
|
|
103 |
" the MAE at 90% availability\n", |
|
|
104 |
" \"\"\"\n", |
|
|
105 |
" # Higher confidence means a better estimate. The best 90% of the estimates\n", |
|
|
106 |
" # are above the 10th percentile confidence.\n", |
|
|
107 |
" percentile90_confidence = np.percentile(confidence_est, 10)\n", |
|
|
108 |
"\n", |
|
|
109 |
" # Find the errors of the best pulse rate estimates\n", |
|
|
110 |
" best_estimates = pr_errors[confidence_est >= percentile90_confidence]\n", |
|
|
111 |
"\n", |
|
|
112 |
" # Return the mean absolute error\n", |
|
|
113 |
" return np.mean(np.abs(best_estimates))\n", |
|
|
114 |
"\n", |
|
|
115 |
"def Evaluate():\n", |
|
|
116 |
" \"\"\"\n", |
|
|
117 |
" Top-level function evaluation function.\n", |
|
|
118 |
"\n", |
|
|
119 |
" Runs the pulse rate algorithm on the Troika dataset and returns an aggregate error metric.\n", |
|
|
120 |
"\n", |
|
|
121 |
" Returns:\n", |
|
|
122 |
" Pulse rate error on the Troika dataset. See AggregateErrorMetric.\n", |
|
|
123 |
" \"\"\"\n", |
|
|
124 |
" # Retrieve dataset files\n", |
|
|
125 |
" data_fls, ref_fls = LoadTroikaDataset()\n", |
|
|
126 |
" errs, conFs = [], []\n", |
|
|
127 |
" for data_fl, ref_fl in zip(data_fls, ref_fls):\n", |
|
|
128 |
" # Run the pulse rate algorithm on each trial in the dataset\n", |
|
|
129 |
" errors, confidence = RunPulseRateAlgorithm(data_fl, ref_fl)\n", |
|
|
130 |
" errs.append(errors)\n", |
|
|
131 |
" conFs.append(confidence)\n", |
|
|
132 |
" # Compute aggregate error metric\n", |
|
|
133 |
" errs = np.hstack(errs)\n", |
|
|
134 |
" conFs = np.hstack(conFs)\n", |
|
|
135 |
" return AggregateErrorMetric(errs, conFs)\n", |
|
|
136 |
"\n", |
|
|
137 |
"def RunPulseRateAlgorithm(data_fl, ref_fl):\n", |
|
|
138 |
" \n", |
|
|
139 |
" \"\"\"\n", |
|
|
140 |
" Run the algorithm \n", |
|
|
141 |
" \n", |
|
|
142 |
" Args: \n", |
|
|
143 |
" data_fls: Names of the .mat files that contain signal data\n", |
|
|
144 |
" ref_fls: Names of the .mat files that contain reference data\n", |
|
|
145 |
" \n", |
|
|
146 |
"\n", |
|
|
147 |
" Returns:\n", |
|
|
148 |
" np.array(error): Array with error for predictions\n", |
|
|
149 |
" np.array(confidence): Array with confidence for predictions\n", |
|
|
150 |
" \"\"\"\n", |
|
|
151 |
" \n", |
|
|
152 |
" Fs = 125 # Sample Frequency\n", |
|
|
153 |
" window_len = 8 # Window to calculate PR\n", |
|
|
154 |
" window_shift = 2 # Difference between windows \n", |
|
|
155 |
" \n", |
|
|
156 |
" reg, scores = Regressor()\n", |
|
|
157 |
" targets, features, sigs, subs = Data_window8(data_fl, ref_fl)\n", |
|
|
158 |
" error, confidence = [], []\n", |
|
|
159 |
" for i,feature in enumerate(features):\n", |
|
|
160 |
" est = reg.predict(np.reshape(feature, (1, -1)))[0]\n", |
|
|
161 |
" \n", |
|
|
162 |
" ppg, accx, accy, accz = sigs[i]\n", |
|
|
163 |
" \n", |
|
|
164 |
" ppg = Filter(ppg) \n", |
|
|
165 |
" accx = Filter(accx)\n", |
|
|
166 |
" accy = Filter(accy)\n", |
|
|
167 |
" accz = Filter(accz) \n", |
|
|
168 |
" \n", |
|
|
169 |
" n = len(ppg) * 3\n", |
|
|
170 |
" freq = np.fft.rfftfreq(n, 1/Fs)\n", |
|
|
171 |
" fft = np.abs(np.fft.rfft(ppg,n))\n", |
|
|
172 |
" fft[freq <= 40/60.0] = 0.0\n", |
|
|
173 |
" fft[freq >= 240/60.0] = 0.0\n", |
|
|
174 |
" \n", |
|
|
175 |
" est_Fs = est / 55.0\n", |
|
|
176 |
" Fs_win = 30 / 60.0\n", |
|
|
177 |
" Fs_win_e = (freq >= est_Fs - Fs_win) & (freq <= est_Fs +Fs_win)\n", |
|
|
178 |
" conf = np.sum(fft[Fs_win_e])/np.sum(fft)\n", |
|
|
179 |
" \n", |
|
|
180 |
" error.append(np.abs((est-targets[i])))\n", |
|
|
181 |
" confidence.append(conf)\n", |
|
|
182 |
" return np.array(error), np.array(confidence)\n", |
|
|
183 |
"\n", |
|
|
184 |
"def Data_window8(data_fl, ref_fl):\n", |
|
|
185 |
" \n", |
|
|
186 |
" \"\"\"\n", |
|
|
187 |
" Load and prepare the data, based on windows length and shift and filters \n", |
|
|
188 |
" \n", |
|
|
189 |
" Args: \n", |
|
|
190 |
" data_fls: Names of the .mat files that contain signal data\n", |
|
|
191 |
" ref_fls: Names of the .mat files that contain reference data\n", |
|
|
192 |
" \n", |
|
|
193 |
"\n", |
|
|
194 |
" Returns:\n", |
|
|
195 |
" np.array(targets): Array with targets\n", |
|
|
196 |
" np.array(features): Array with features\n", |
|
|
197 |
" sigs: treated signals\n", |
|
|
198 |
" subs: name of file (or subject)\n", |
|
|
199 |
" \"\"\"\n", |
|
|
200 |
" \n", |
|
|
201 |
" Fs=125 # Sampling frequency\n", |
|
|
202 |
" window_len = 6 # Window to calculate PR\n", |
|
|
203 |
" window_shift = 2 # Difference between windows\n", |
|
|
204 |
" \n", |
|
|
205 |
" sig = LoadTroikaDataFile(data_fl)\n", |
|
|
206 |
" ref = scipy.io.loadmat(ref_fl)[\"BPM0\"]\n", |
|
|
207 |
" ref = np.array([x[0] for x in ref])\n", |
|
|
208 |
" subject_name = os.path.basename(data_fl).split('.')[0] \n", |
|
|
209 |
" start_indxs, end_indxs = Indexator(sig.shape[1], len(ref), Fs, window_len,window_shift)\n", |
|
|
210 |
" targets, features, sigs, subs = [], [], [], []\n", |
|
|
211 |
" for i, s in enumerate(start_indxs):\n", |
|
|
212 |
" start_i = start_indxs[i]\n", |
|
|
213 |
" end_i = end_indxs[i]\n", |
|
|
214 |
"\n", |
|
|
215 |
" ppg = sig[0, start_i:end_i] \n", |
|
|
216 |
" accx = sig[1, start_i:end_i]\n", |
|
|
217 |
" accy = sig[2, start_i:end_i]\n", |
|
|
218 |
" accz = sig[3, start_i:end_i]\n", |
|
|
219 |
"\n", |
|
|
220 |
" ppg = Filter(ppg)\n", |
|
|
221 |
" accx = Filter(accx)\n", |
|
|
222 |
" accy = Filter(accy)\n", |
|
|
223 |
" accz = Filter(accz)\n", |
|
|
224 |
"\n", |
|
|
225 |
" feature, ppg, accx, accy, accz = CreateFeature(ppg, accx, accy, accz)\n", |
|
|
226 |
"\n", |
|
|
227 |
" sigs.append([ppg, accx, accy, accz])\n", |
|
|
228 |
" targets.append(ref[i])\n", |
|
|
229 |
" features.append(feature)\n", |
|
|
230 |
" subs.append(subject_name)\n", |
|
|
231 |
" \n", |
|
|
232 |
" return (np.array(targets), np.array(features), sigs, subs)\n", |
|
|
233 |
"\n", |
|
|
234 |
"def Data_window6():\n", |
|
|
235 |
" \n", |
|
|
236 |
" \"\"\"\n", |
|
|
237 |
" Load and prepare the data, based on windows length and shift and filters \n", |
|
|
238 |
" \n", |
|
|
239 |
" Args: \n", |
|
|
240 |
" data_fls: Names of the .mat files that contain signal data\n", |
|
|
241 |
" ref_fls: Names of the .mat files that contain reference data\n", |
|
|
242 |
" \n", |
|
|
243 |
"\n", |
|
|
244 |
" Returns:\n", |
|
|
245 |
" np.array(targets): Array with targets\n", |
|
|
246 |
" np.array(features): Array with features\n", |
|
|
247 |
" sigs: treated signals\n", |
|
|
248 |
" subs: name of file (or subject)\n", |
|
|
249 |
" \"\"\"\n", |
|
|
250 |
" \n", |
|
|
251 |
" Fs=125 # Sampling rate \n", |
|
|
252 |
" window_len = 6 # Window to calculate PR\n", |
|
|
253 |
" window_shift = 2 # Difference between windows\n", |
|
|
254 |
" \n", |
|
|
255 |
" data_fls, ref_fls = LoadTroikaDataset()\n", |
|
|
256 |
" pbar = tqdm(list(zip(data_fls, ref_fls)), desc=\"Prepare Data\")\n", |
|
|
257 |
" targets, features, sigs, subs = [], [], [], []\n", |
|
|
258 |
" for data_fl, ref_fl in pbar:\n", |
|
|
259 |
" sig = LoadTroikaDataFile(data_fl)\n", |
|
|
260 |
" ref = scipy.io.loadmat(ref_fl)[\"BPM0\"]\n", |
|
|
261 |
" ref = np.array([x[0] for x in ref])\n", |
|
|
262 |
" subject_name = os.path.basename(data_fl).split('.')[0] \n", |
|
|
263 |
" start_indxs, end_indxs = Indexator(sig.shape[1], len(ref), Fs, window_len,window_shift)\n", |
|
|
264 |
" for i, s in enumerate(start_indxs):\n", |
|
|
265 |
" start_i = start_indxs[i]\n", |
|
|
266 |
" end_i = end_indxs[i]\n", |
|
|
267 |
"\n", |
|
|
268 |
" ppg = sig[0, start_i:end_i] \n", |
|
|
269 |
" accx = sig[1, start_i:end_i]\n", |
|
|
270 |
" accy = sig[2, start_i:end_i]\n", |
|
|
271 |
" accz = sig[3, start_i:end_i]\n", |
|
|
272 |
"\n", |
|
|
273 |
"\n", |
|
|
274 |
" ppg = Filter(ppg)\n", |
|
|
275 |
" accx = Filter(accx)\n", |
|
|
276 |
" accy = Filter(accy)\n", |
|
|
277 |
" accz = Filter(accz)\n", |
|
|
278 |
"\n", |
|
|
279 |
" feature, ppg, accx, accy, accz = CreateFeature(ppg, accx, accy, accz)\n", |
|
|
280 |
"\n", |
|
|
281 |
" sigs.append([ppg, accx, accy, accz])\n", |
|
|
282 |
" targets.append(ref[i])\n", |
|
|
283 |
" features.append(feature)\n", |
|
|
284 |
" subs.append(subject_name)\n", |
|
|
285 |
" \n", |
|
|
286 |
" return (np.array(targets), np.array(features), sigs, subs)\n", |
|
|
287 |
"\n", |
|
|
288 |
"def CreateFeature(ppg, accx, accy, accz):\n", |
|
|
289 |
" \"\"\" Create features \n", |
|
|
290 |
" \n", |
|
|
291 |
" Args: \n", |
|
|
292 |
" ppg, accx, accy, accz: signals\n", |
|
|
293 |
" \n", |
|
|
294 |
"\n", |
|
|
295 |
" Returns:\n", |
|
|
296 |
" np.array([ppg_feature, acc_feature]): features from PPG and ACC signals\n", |
|
|
297 |
" ppg, accx, accy, accz: signals\n", |
|
|
298 |
" \n", |
|
|
299 |
" \"\"\"\n", |
|
|
300 |
" ppg = Filter(ppg)\n", |
|
|
301 |
" accx = Filter(accx)\n", |
|
|
302 |
" accy = Filter(accy)\n", |
|
|
303 |
" accz = Filter(accz)\n", |
|
|
304 |
" \n", |
|
|
305 |
" \n", |
|
|
306 |
" Fs = 125\n", |
|
|
307 |
" n = len(ppg) * 4\n", |
|
|
308 |
" freq = np.fft.rfftfreq(n, 1/Fs)\n", |
|
|
309 |
" fft = np.abs(np.fft.rfft(ppg,n))\n", |
|
|
310 |
" fft[freq <= 40/60.0] = 0.0\n", |
|
|
311 |
" fft[freq >= 240/60.0] = 0.0\n", |
|
|
312 |
" \n", |
|
|
313 |
" acct = np.sqrt(accx**2 + accy**2 + accz**2) # Total signal of acc\n", |
|
|
314 |
" \n", |
|
|
315 |
" acc_fft = np.abs(np.fft.rfft(acct, n))\n", |
|
|
316 |
" acc_fft[freq <= 40/60.0] = 0.0\n", |
|
|
317 |
" acc_fft[freq >= 240/60.0] = 0.0\n", |
|
|
318 |
" \n", |
|
|
319 |
" ppg_feature = freq[np.argmax(fft)]\n", |
|
|
320 |
" acc_feature = freq[np.argmax(acc_fft)]\n", |
|
|
321 |
" \n", |
|
|
322 |
" return (np.array([ppg_feature, acc_feature]), ppg, accx, accy, accz)\n", |
|
|
323 |
"\n", |
|
|
324 |
"def RegressionAlg(features, targets, subs):\n", |
|
|
325 |
" \"\"\" \n", |
|
|
326 |
" \n", |
|
|
327 |
" Create the regression model\n", |
|
|
328 |
" \n", |
|
|
329 |
" Args: \n", |
|
|
330 |
" features: Features obtained from Data_window()\n", |
|
|
331 |
" targets: Targets obtained from Data_window()\n", |
|
|
332 |
" subs: individuals \n", |
|
|
333 |
" \n", |
|
|
334 |
"\n", |
|
|
335 |
" Returns:\n", |
|
|
336 |
" regression: result from regression\n", |
|
|
337 |
" score: scores from the regression\n", |
|
|
338 |
" \n", |
|
|
339 |
" \n", |
|
|
340 |
" \"\"\"\n", |
|
|
|
|
342 |
" regression = RandomForestRegressor(n_estimators=400,max_depth=16)\n", |
|
|
343 |
" scores = []\n", |
|
|
344 |
" lf = KFold(n_splits=5)\n", |
|
|
345 |
" splits = lf.split(features,targets,subs)\n", |
|
|
346 |
" for i, (train_idx, test_idx) in enumerate(splits):\n", |
|
|
347 |
" X_train, y_train = features[train_idx], targets[train_idx]\n", |
|
|
348 |
" X_test, y_test = features[test_idx], targets[test_idx]\n", |
|
|
349 |
" regression.fit(X_train, y_train)\n", |
|
|
350 |
" y_pred = regression.Predict(X_test)\n", |
|
|
351 |
" score = Error(y_test, y_pred)\n", |
|
|
352 |
" scores.append(score)\n", |
|
|
353 |
" \n", |
|
|
354 |
" return (regression, scores)\n", |
|
|
355 |
"\n", |
|
|
356 |
"def Filter(signal):\n", |
|
|
357 |
" \n", |
|
|
358 |
" \"\"\" \n", |
|
|
359 |
" \n", |
|
|
360 |
" Bandpass filter between 40 and 240 BPM\n", |
|
|
361 |
" \n", |
|
|
362 |
" Args: \n", |
|
|
363 |
" signal: signal to be filtered\n", |
|
|
364 |
" \n", |
|
|
365 |
"\n", |
|
|
366 |
" Returns:\n", |
|
|
367 |
" signal:filtered signal\n", |
|
|
|
|
369 |
" \n", |
|
|
370 |
" \n", |
|
|
371 |
" \"\"\"\n", |
|
|
372 |
" \n", |
|
|
373 |
" pass_band=(40/60.0, 240/60.0)\n", |
|
|
374 |
" Fs = 125\n", |
|
|
375 |
" b, a = scipy.signal.butter(3, pass_band, btype='bandpass', fs=Fs)\n", |
|
|
376 |
" return scipy.signal.filtfilt(b, a, signal)\n", |
|
|
377 |
"\n", |
|
|
378 |
"def Indexator(sig_len, ref_len, Fs=125, window_len_s=10, window_shift_s=2):\n", |
|
|
379 |
" \"\"\"\n", |
|
|
380 |
" Find start and end index to iterate over a set of signals\n", |
|
|
381 |
" \n", |
|
|
382 |
" Args: \n", |
|
|
383 |
" sig_len, ref_len: signal and reference lenght\n", |
|
|
384 |
" Fs, window_len_s, window_shift: sample frequency, window lenght and window shift\n", |
|
|
385 |
" \n", |
|
|
386 |
"\n", |
|
|
387 |
" Returns:\n", |
|
|
388 |
" start_indxs: start index\n", |
|
|
389 |
" end_indxs: end index\n", |
|
|
390 |
" \"\"\"\n", |
|
|
391 |
" # Set the length of the biggest signal with regards to the reference signal\n", |
|
|
392 |
" if ref_len < sig_len:\n", |
|
|
393 |
" n = ref_len\n", |
|
|
394 |
" else:\n", |
|
|
395 |
" n = sig_len\n", |
|
|
396 |
" \n", |
|
|
397 |
" # Start Indexes \n", |
|
|
398 |
" start_indxs = (np.cumsum(np.ones(n) * Fs * window_shift_s) - Fs * window_shift_s).astype(int)\n", |
|
|
399 |
" \n", |
|
|
400 |
" # End Indexes (same size as the start indexes array)\n", |
|
|
401 |
" end_indxs = start_indxs + window_len_s * Fs\n", |
|
|
402 |
" return (start_indxs, end_indxs)\n", |
|
|
403 |
"\n", |
|
|
404 |
"def Predict(reg,feature, ppg, accx, accy, accz):\n", |
|
|
405 |
" \"\"\"\n", |
|
|
406 |
" Create the prediction based on the regression algorithm\n", |
|
|
407 |
" \"\"\"\n", |
|
|
408 |
" est = reg.predict(np.reshape(feature, (1, -1)))[0]\n", |
|
|
409 |
" \n", |
|
|
410 |
"def Error(y_test, y_pred):\n", |
|
|
411 |
" \"\"\"\n", |
|
|
412 |
" Calculate error score of the prediction\n", |
|
|
413 |
" \"\"\"\n", |
|
|
414 |
" return mean_squared_error(y_test, y_pred)\n", |
|
|
415 |
"\n", |
|
|
416 |
"\n", |
|
|
417 |
"def Regressor():\n", |
|
|
418 |
" \"\"\"\n", |
|
|
419 |
" Apply regression\n", |
|
|
420 |
" \"\"\"\n", |
|
|
421 |
" fname = \"outfile.npy\"\n", |
|
|
422 |
" reg, scores = [], []\n", |
|
|
423 |
" if os.path.isfile(fname):\n", |
|
|
424 |
" [reg,scores] = np.load(fname,allow_pickle=True)\n", |
|
|
|
|
426 |
" else:\n", |
|
|
427 |
" targets, features, sigs, subs = Data_window6()\n", |
|
|
428 |
" reg, scores = RegressionAlg(features, targets, subs)\n", |
|
|
429 |
" np.save(\"outfile\", [reg,scores])\n", |
|
|
430 |
" return reg, scores" |
|
|
431 |
] |
|
|
432 |
}, |
|
|
433 |
{ |
|
|
434 |
"cell_type": "code", |
|
|
435 |
"execution_count": 42, |
|
|
436 |
"metadata": {}, |
|
|
437 |
"outputs": [ |
|
|
438 |
{ |
|
|
439 |
"data": { |
|
|
440 |
"text/plain": [ |
|
|
441 |
"9.6607033250413767" |
|
|
442 |
] |
|
|
443 |
}, |
|
|
444 |
"execution_count": 42, |
|
|
445 |
"metadata": {}, |
|
|
446 |
"output_type": "execute_result" |
|
|
447 |
} |
|
|
448 |
], |
|
|
449 |
"source": [ |
|
|
450 |
"Evaluate()" |
|
|
451 |
] |
|
|
452 |
}, |
|
|
453 |
{ |
|
|
454 |
"cell_type": "markdown", |
|
|
455 |
"metadata": {}, |
|
|
456 |
"source": [ |
|
|
457 |
"-----\n", |
|
|
458 |
"### Project Write-up\n", |
|
|
459 |
"\n", |
|
|
460 |
"Answer the following prompts to demonstrate understanding of the algorithm you wrote for this specific context.\n", |
|
|
461 |
"\n", |
|
|
462 |
"> - **Code Description** - Include details so someone unfamiliar with your project will know how to run your code and use your algorithm. \n", |
|
|
463 |
"> - **Data Description** - Describe the dataset that was used to train and test the algorithm. Include its short-comings and what data would be required to build a more complete dataset.\n", |
|
|
464 |
"> - **Algorithhm Description** will include the following:\n", |
|
|
465 |
"> - how the algorithm works\n", |
|
|
466 |
"> - the specific aspects of the physiology that it takes advantage of\n", |
|
|
467 |
"> - a describtion of the algorithm outputs\n", |
|
|
468 |
"> - caveats on algorithm outputs \n", |
|
|
469 |
"> - common failure modes\n", |
|
|
470 |
"> - **Algorithm Performance** - Detail how performance was computed (eg. using cross-validation or train-test split) and what metrics were optimized for. Include error metrics that would be relevant to users of your algorithm. Caveat your performance numbers by acknowledging how generalizable they may or may not be on different datasets.\n", |
|
|
471 |
"\n", |
|
|
472 |
"Your write-up goes here..." |
|
|
473 |
] |
|
|
474 |
}, |
|
|
475 |
{ |
|
|
476 |
"cell_type": "markdown", |
|
|
477 |
"metadata": {}, |
|
|
478 |
"source": [ |
|
|
479 |
"**Code description**\n", |
|
|
480 |
"\n", |
|
|
481 |
"The algorithm that predicts heart rate in BPM is based on the estimation of PPG signals and ACC. As requested by Udacity, I used only the second PPG signal (signals were measured on both wrists). The available code returns the mean absolute error and confidence. \n" |
|
|
482 |
] |
|
|
483 |
}, |
|
|
484 |
{ |
|
|
485 |
"cell_type": "markdown", |
|
|
486 |
"metadata": {}, |
|
|
487 |
"source": [ |
|
|
488 |
"**Data description**\n", |
|
|
489 |
"\n", |
|
|
490 |
"Data Description The Troika data set is used to build the algorithm (https://arxiv.org/pdf/1409.5181.pdf). We had the following signals to work on:\n", |
|
|
491 |
"- ECG signal \n", |
|
|
492 |
"- PPG two signals from each wrist \n", |
|
|
493 |
"- Three channels fro the accelerometer each one corresponding to (x,y and z).\n", |
|
|
494 |
"- Data is sampled at 125Hz." |
|
|
495 |
] |
|
|
496 |
}, |
|
|
497 |
{ |
|
|
498 |
"cell_type": "markdown", |
|
|
499 |
"metadata": {}, |
|
|
500 |
"source": [ |
|
|
501 |
"**How the algorithm works**\n", |
|
|
502 |
"\n", |
|
|
503 |
"The algorithm uses RandomForestRegressor to fit the heart rate training data.\n", |
|
|
504 |
"\n", |
|
|
505 |
"- The specific aspects of the physiology that it takes advantage of:\n", |
|
|
506 |
"\n", |
|
|
507 |
"Two PPG signals were used (from each wrist). As explained in the course, the capillaries in the wrist fill with blood when the heart's ventricles contract. When the blood returns to the heart there are fewer blood cells in the wrist. The PPG sensor emits a green light which can be absorbed by the red blood cells and the photodetector will see the various levels of blood flow in the reflected light. When the blood returns to the heart fewer blood cells can absorb the green light and the photo detector can see an increase in the reflected light.\n", |
|
|
508 |
"\n", |
|
|
509 |
"- A description of the algorithm outputs:\n", |
|
|
510 |
"\n", |
|
|
511 |
"The algorithm returns BPM and the confidence rate of this prediction. The higher the confidence rate, the higher we can trust the prediction. \n", |
|
|
512 |
"\n", |
|
|
513 |
"- Caveats on algorithm outputs\n", |
|
|
514 |
"\n", |
|
|
515 |
"The confidence rate is calculated based on the magnitude of a small area that contains the estimated spectral frequency relative to the sum magnitude of the entire spectrum.\n", |
|
|
516 |
"\n", |
|
|
517 |
"- Common failure modes \n", |
|
|
518 |
"PPG might pick higher frequencies related to motion. To deal with this problem, we take into consideration the accelerometer values.\n", |
|
|
519 |
"\n" |
|
|
520 |
] |
|
|
521 |
}, |
|
|
522 |
{ |
|
|
523 |
"cell_type": "markdown", |
|
|
524 |
"metadata": {}, |
|
|
525 |
"source": [ |
|
|
526 |
"**Algorithm Performance**\n", |
|
|
527 |
"\n", |
|
|
528 |
"The mean absolute error was calculated and the ground truth reference signal was obtained from the ECG sensors. For cross validation I used the KFold. The error rate was around ~8 BPM on the test set. Since the data is using only limited subjects the algorithm may not be able to generalize well." |
|
|
529 |
] |
|
|
530 |
}, |
|
|
531 |
{ |
|
|
532 |
"cell_type": "code", |
|
|
533 |
"execution_count": null, |
|
|
534 |
"metadata": {}, |
|
|
535 |
"outputs": [], |
|
|
536 |
"source": [] |
|
|
537 |
} |
|
|
538 |
], |
|
|
539 |
"metadata": { |
|
|
540 |
"kernelspec": { |
|
|
541 |
"display_name": "Python 3", |
|
|
542 |
"language": "python", |
|
|
543 |
"name": "python3" |
|
|
544 |
}, |
|
|
545 |
"language_info": { |
|
|
546 |
"codemirror_mode": { |
|
|
547 |
"name": "ipython", |
|
|
548 |
"version": 3 |
|
|
549 |
}, |
|
|
550 |
"file_extension": ".py", |
|
|
551 |
"mimetype": "text/x-python", |
|
|
552 |
"name": "python", |
|
|
553 |
"nbconvert_exporter": "python", |
|
|
554 |
"pygments_lexer": "ipython3", |
|
|
555 |
"version": "3.6.3" |
|
|
556 |
} |
|
|
557 |
}, |
|
|
558 |
"nbformat": 4, |
|
|
559 |
"nbformat_minor": 4 |
|
|
560 |
} |