<p align="center">
  <img src="images/Project Logo.png" alt="HAR Logo" width="80px" height="80px">
</p>
<h1 align="center"> Human Activity Recognition </h1>
<h3 align="center"> A Comparative Study between Different Pre-processing Approaches and Classifiers </h3>

<br>

<p align="center">
  <img src="images/Signal.gif" alt="Sample signal" width="70%" height="70%">
</p>

<!-- TABLE OF CONTENTS -->
<h2 id="table-of-contents"> :book: Table of Contents</h2>

<details open="open">
  <summary>Table of Contents</summary>
  <ol>
    <li><a href="#about-the-project"> ➤ About The Project</a></li>
    <li><a href="#prerequisites"> ➤ Prerequisites</a></li>
    <li><a href="#folder-structure"> ➤ Folder Structure</a></li>
    <li><a href="#dataset"> ➤ Dataset</a></li>
    <li><a href="#roadmap"> ➤ Roadmap</a></li>
    <li>
      <a href="#preprocessing"> ➤ Preprocessing</a>
      <ul>
        <li><a href="#preprocessed-data">Pre-processed data</a></li>
        <li><a href="#statistical-feature">Statistical feature</a></li>
        <li><a href="#topological-feature">Topological feature</a></li>
      </ul>
    </li>
    <!--<li><a href="#experiments">Experiments</a></li>-->
    <li><a href="#results-and-discussion"> ➤ Results and Discussion</a></li>
    <li><a href="#references"> ➤ References</a></li>
    <li><a href="#contributors"> ➤ Contributors</a></li>
  </ol>
</details>

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- ABOUT THE PROJECT -->
<h2 id="about-the-project"> :pencil: About The Project</h2>

<p align="justify">
  This project focuses on classifying human activities using data collected from the accelerometer and gyroscope sensors of smartphones and smartwatches. The raw sensor data is preprocessed in two distinct ways: topological data analysis and statistical feature extraction from segmented time series. The aim is to compare the performance of various classifiers, including Decision Tree, k-Nearest Neighbors, Random Forest, SVM, and CNN, trained on the two differently preprocessed datasets.
</p>

<p align="center">
  <img src="images/WISDM Activities.png" alt="Table 1: 18 Activities" width="70%" height="70%">
  <!--figcaption>Caption goes here</figcaption-->
</p>

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- PREREQUISITES -->
<h2 id="prerequisites"> :fork_and_knife: Prerequisites</h2>

[![made-with-python](https://img.shields.io/badge/Made%20with-Python-1f425f.svg)](https://www.python.org/) <br>
[![Made with Jupyter](https://img.shields.io/badge/Made%20with-Jupyter-orange?style=for-the-badge&logo=Jupyter)](https://jupyter.org/try) <br>

<!--This project is written in Python programming language. <br>-->
The following open-source packages are used in this project:
* NumPy
* Pandas
* Matplotlib
* Scikit-learn
* Scikit-tda
* Giotto-tda
* TensorFlow
* Keras

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- :paw_prints:-->
<!-- FOLDER STRUCTURE -->
<h2 id="folder-structure"> :cactus: Folder Structure</h2>

    code
    .
    ├── data
    │   ├── raw_data
    │   │   ├── phone
    │   │   │   ├── accel
    │   │   │   └── gyro
    │   │   └── watch
    │   │       ├── accel
    │   │       └── gyro
    │   │
    │   ├── transformed_data
    │   │   ├── phone
    │   │   │   ├── accel
    │   │   │   └── gyro
    │   │   └── watch
    │   │       ├── accel
    │   │       └── gyro
    │   │
    │   ├── feature_label_tables
    │   │    ├── feature_phone_accel
    │   │    ├── feature_phone_gyro
    │   │    ├── feature_watch_accel
    │   │    └── feature_watch_gyro
    │   │
    │   └── wisdm-dataset
    │        └── raw
    │            ├── phone
    │            │   ├── accel
    │            │   └── gyro
    │            └── watch
    │                ├── accel
    │                └── gyro
    │
    ├── CNN_Impersonal_TransformedData.ipynb
    ├── CNN_Personal_TransformedData.ipynb
    ├── CNN_Impersonal_RawData.ipynb
    ├── CNN_Personal_RawData.ipynb
    ├── Classifier_SVM_Personal.ipynb
    ├── Classifier_SVM_Impersonal.ipynb
    ├── statistical_analysis_time_domain.py
    └── Topological data analysis.ipynb

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- DATASET -->
<h2 id="dataset"> :floppy_disk: Dataset</h2>
<p align="justify">
  The WISDM (Wireless Sensor Data Mining) dataset includes raw time-series data collected from the accelerometer and gyroscope sensors of a smartphone and a smartwatch, with corresponding labels for each activity. The sensor data was collected at a rate of 20 Hz (i.e., one sample every 50 ms). Weiss et al. collected this dataset from 51 subjects who performed the 18 different activities listed in Table 2, each for 3 minutes, while carrying the smartphone in their right pants pocket and wearing the smartwatch on their dominant hand. Each line of the time-series sensor file is treated as one input.
</p>

<p align="center">
  <img src="images/Human Activity.gif" alt="Human Activity.gif" width="60%" height="50%">
</p>

_The WISDM dataset is publicly available. Please refer to the [link](https://archive.ics.uci.edu/ml/datasets/WISDM+Smartphone+and+Smartwatch+Activity+and+Biometrics+Dataset+)._

<p align="justify">
  The following table shows the 18 activities represented in the dataset.
</p>

<p align="center">
  <img src="images/Activity Table.png" alt="Table 2: 18 Activities" width="45%" height="45%">
</p>

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- ROADMAP -->
<h2 id="roadmap"> :dart: Roadmap</h2>

<p align="justify">
  Weiss et al. trained three models, namely Decision Tree, k-Nearest Neighbors, and Random Forest, for human activity classification, preprocessing the raw time series via statistical feature extraction from segmented time series.
  The goals of this project include the following:
</p>
<ol>
  <li>
    <p align="justify">
      Train the same models (Decision Tree, k-Nearest Neighbors, and Random Forest) on the preprocessed data obtained from topological data analysis and compare their performance against the results obtained by Weiss et al.
    </p>
  </li>
  <li>
    <p align="justify">
      Train an SVM and a CNN on the preprocessed data generated by Weiss et al. and evaluate their performance against their Decision Tree, k-Nearest Neighbors, and Random Forest models.
    </p>
  </li>
</ol>

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- PREPROCESSING -->
<h2 id="preprocessing"> :hammer: Preprocessing</h2>

<p align="justify">
  As described in the Dataset section, the WISDM dataset consists of raw time series collected at 20 Hz from the accelerometer and gyroscope sensors of a smartphone and a smartwatch, labeled with the 18 activities listed in the previous table. <br>
  In this project we tried three different feature sets extracted from the raw data:
</p>
<ol>
  <li><b>Pre-processed data</b> generated by Weiss et al.</li>
  <li><b>Statistical feature extraction</b></li>
  <li><b>Topological feature extraction</b></li>
</ol>
<p align="justify">
  All three approaches use a windowing technique to segment the raw time series and extract features from each segment.
</p>
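The windowing step shared by all three approaches can be sketched as follows; this is a minimal illustration with our own function and parameter names, not the project's actual code:

```python
import numpy as np

def segment_signal(signal, window_size=200):
    """Split a 1-D sensor signal into non-overlapping windows.

    At the WISDM sampling rate of 20 Hz, a 10-second window
    corresponds to window_size = 200 samples; any trailing
    partial window is discarded.
    """
    signal = np.asarray(signal, dtype=float)
    n_windows = len(signal) // window_size
    return signal[: n_windows * window_size].reshape(n_windows, window_size)

# Example: one minute of synthetic accelerometer data -> 6 segments of 200 samples
segments = segment_signal(np.random.randn(1200))
```

Features are then computed per row of the resulting array, one feature vector per 10-second segment.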

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- PRE-PROCESSED DATA -->
<h2 id="preprocessed-data"> :diamond_shape_with_a_dot_inside: Pre-processed data</h2>

<p align="justify">
  Weiss et al. used a windowing technique with a window size of 10 seconds to extract statistical features. They extracted 93 features, 43 of which were used to train their models. We used the same 43 features to train our SVM and CNN. The 43 features are, per axis: (1) average sensor value, (2) standard deviation, (3) absolute difference, (4) average resultant acceleration, (5) binned distribution (10 equal-sized bins per axis), and (6) time between peaks.
</p>
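For illustration, the binned-distribution feature (item 5 above, contributing 10 values per axis) can be sketched with NumPy; the helper name and the choice of equal-width bins over each window's own range are our assumptions, not Weiss et al.'s exact implementation:

```python
import numpy as np

def binned_distribution(window, n_bins=10):
    """Fraction of a window's samples falling in each of n_bins
    equal-width bins spanning the window's min-max range."""
    counts, _ = np.histogram(window, bins=n_bins)
    return counts / counts.sum()

# 200 evenly spaced samples spread uniformly over 10 bins
dist = binned_distribution(np.linspace(-1.0, 1.0, 200))
```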

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- STATISTICAL FEATURE -->
<h2 id="statistical-feature"> :large_orange_diamond: Statistical feature</h2>

<p align="justify">
  For this approach, we segmented the dataset using a 10-second window (200 data points) with no overlap. We kept the window size the same as the one Weiss et al. applied in their study, for the sake of comparison. After segmentation, we calculated eight statistical features for each segment and each axis, namely min, max, mean, standard deviation, median, variance, zero crossing, and mean crossing. The zero-crossing and mean-crossing features count how often the signal crosses the line y = 0 and the line y = mean(signal), respectively (taking the y-axis as the sensor measurement and the x-axis as time). However, these two features did not show a significant difference between activities, so we decided to drop them.
</p>
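The per-segment computation above can be sketched in NumPy; function names are ours, and this is an illustration rather than the project's `statistical_analysis_time_domain.py`:

```python
import numpy as np

def window_features(window):
    """Six statistical features kept per axis after dropping the
    crossing counts: min, max, mean, std, median, variance."""
    w = np.asarray(window, dtype=float)
    return np.array([w.min(), w.max(), w.mean(), w.std(),
                     np.median(w), w.var()])

def crossing_count(window, level=0.0):
    """Number of times the signal crosses the line y = level
    (the zero-/mean-crossing features that were later dropped)."""
    w = np.asarray(window, dtype=float) - level
    return int(np.sum(np.signbit(w[:-1]) != np.signbit(w[1:])))

feats = window_features(np.array([1.0, 2.0, 2.0, 3.0]))
```

The mean-crossing count is then `crossing_count(w, level=w.mean())` for a window `w`.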

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- TOPOLOGICAL FEATURE -->
<h2 id="topological-feature"> :large_blue_diamond: Topological feature</h2>

<p align="justify">
  Topological data analysis provides various techniques to explore the topological properties and shape of data.
  Since time-series sensor data obtained from performing an activity may have topological properties, we extracted features from the topology of the data and performed the classification task on those features. For each time segment, we explore its topology using a persistence diagram generated via persistent homology. Persistent homology can be computed through filtrations such as Vietoris-Rips, SparseRips, and Cubical Persistence on the data, and it captures the birth and death of topological features across dimensions (e.g., connected components, tunnels, voids) [2]. One of the main challenges in computing persistent homology is finding an appropriate filtration for the time segments. In total, 18 topological features were extracted for each time segment.
</p>
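Before a Vietoris-Rips filtration can be applied, each 1-D segment is typically lifted to a point cloud via a Takens delay embedding [1, 3]. A minimal NumPy sketch follows; the dimension and delay values are illustrative only (in practice a library such as giotto-tda provides the embedding together with the persistence computation):

```python
import numpy as np

def takens_embedding(x, dimension=3, delay=1):
    """Map a 1-D time series to a point cloud in R^dimension by
    stacking delayed copies of the signal (Takens' theorem)."""
    x = np.asarray(x, dtype=float)
    n_points = len(x) - (dimension - 1) * delay
    return np.stack(
        [x[i * delay : i * delay + n_points] for i in range(dimension)],
        axis=1,
    )

# A 200-sample window becomes a cloud of 196 points in R^3 with delay 2
cloud = takens_embedding(np.sin(np.linspace(0, 8 * np.pi, 200)),
                         dimension=3, delay=2)
```

The persistence diagram of each such cloud is then summarized into the fixed-length topological feature vector used for classification.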

<!-- EXPERIMENTS -->
<!--<h2 id="experiments"> :microscope: Experiments</h2>-->

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- RESULTS AND DISCUSSION -->
<h2 id="results-and-discussion"> :mag: Results and Discussion</h2>

<p align="justify">
  The overall accuracy scores of the personal and impersonal models are shown in the following tables. Several of our observations mirror the results obtained by Weiss et al.; the key takeaways are discussed below: <br>
</p>
<ul>
  <li>
    <p align="justify">
      The accelerometer senses acceleration from vibration, which is more pronounced during an activity, whereas the gyroscope only senses rotational changes; accordingly, accelerometer-based models outperformed gyroscope-based models across the board. <br>
    </p>
  </li>
  <li>
    <p align="justify">
      Because the style of performing an activity differs from person to person, it is difficult to aggregate those features across all subjects. As a result, our personal models vastly outperformed our impersonal models.
    </p>
  </li>
  <li>
    <p align="justify">
      Non-hand-oriented activities are classified better with the smartphone sensors, and hand-oriented activities are classified better with the smartwatch sensors. Refer to the appendix for activity-wise recall scores.
    </p>
  </li>
  <li>
    <p align="justify">
      The CNN trained on raw sensor data performed better as a personal model, but it performed poorly as an impersonal model.
    </p>
  </li>
</ul>

<p align="center">
  <img src="images/Personal and Impersonal Table.png" alt="Tables 3 and 4" width="75%" height="75%">
</p>

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- REFERENCES -->
<h2 id="references"> :books: References</h2>

<ol>
  <li>
    <p>
      Matthew B. Kennel, Reggie Brown, and Henry D. I. Abarbanel. Determining embedding dimension for phase-space reconstruction using a geometrical construction. Phys. Rev. A, 45:3403–3411, Mar 1992.
    </p>
  </li>
  <li>
    <p>
      L. M. Seversky, S. Davis, and M. Berger. On time-series topological data analysis: New data and opportunities. In 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1014–1022, 2016.
    </p>
  </li>
  <li>
    <p>
      Floris Takens. Detecting strange attractors in turbulence. In David Rand and Lai-Sang Young, editors, Dynamical Systems and Turbulence, Warwick 1980, pages 366–381, Berlin, Heidelberg, 1981. Springer Berlin Heidelberg.
    </p>
  </li>
  <li>
    <p>
      Guillaume Tauzin, Umberto Lupo, Lewis Tunstall, Julian Burella Pérez, Matteo Caorsi, Anibal Medina-Mardones, Alberto Dassatti, and Kathryn Hess. giotto-tda: A topological data analysis toolkit for machine learning and data exploration, 2020.
    </p>
  </li>
  <li>
    <p>
      G. M. Weiss and A. E. O'Neill. Smartphone and smartwatch-based activity recognition. Jul 2019.
    </p>
  </li>
  <li>
    <p>
      G. M. Weiss, K. Yoneda, and T. Hayajneh. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access, 7:133190–133202, 2019.
    </p>
  </li>
  <li>
    <p>
      Jian-Bo Yang, Nguyen Nhut, Phyo San, Xiaoli Li, and Priyadarsini Shonali. Deep convolutional neural networks on multichannel time series for human activity recognition. IJCAI, 07 2015.
    </p>
  </li>
</ol>

![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png)

<!-- CONTRIBUTORS -->
<h2 id="contributors"> :scroll: Contributors</h2>

<p>
  :mortar_board: <i>All participants in this project are graduate students in the <a href="https://www.concordia.ca/ginacody/computer-science-software-eng.html">Department of Computer Science and Software Engineering</a> <b>@</b> <a href="https://www.concordia.ca/">Concordia University</a></i> <br> <br>

  :woman: <b>Divya Bhagavathiappan Shiva</b> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Email: <a>divya.bhagavathiappanshiva@mail.concordia.ca</a> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; GitHub: <a href="https://github.com/divyabhagavathiappan">@divyabhagavathiappan</a> <br>

  :woman: <b>Reethu Navale</b> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Email: <a>reethu.navale@mail.concordia.ca</a> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; GitHub: <a href="https://github.com/reethunavale">@reethunavale</a> <br>

  :woman: <b>Mahsa Sadat Afzali Arani</b> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Email: <a>m_afzali93@yahoo.com</a> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; GitHub: <a href="https://github.com/MahsaAfzali">@MahsaAfzali</a> <br>

  :boy: <b>Mohammad Amin Shamshiri</b> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Email: <a>mohammadamin.shamshiri@mail.concordia.ca</a> <br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; GitHub: <a href="https://github.com/ma-shamshiri">@ma-shamshiri</a> <br>
</p>

<br>
✤ <i>This was the final project for the course COMP 6321 - Machine Learning (Fall 2020) at <a href="https://www.concordia.ca/">Concordia University</a>.</i>