The system receives video as input, scans each frame, and produces 17 key-points for each person detected, each corresponding to the position of one of that person's body parts in that frame. This is done using the [YOLOv7-POSE](https://github.com/WongKinYiu/yolov7/tree/pose "YOLOv7-POSE") model.

For example:

![](https://github.com/bakshtb/Human-Fall-Detection/blob/master/Mydata/keypoints-example.png?raw=true)

You can learn more about YOLOv7-POSE by reading this [document](https://arxiv.org/ftp/arxiv/papers/2204/2204.06806.pdf "document").
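
The 17 key-points follow the standard COCO ordering, each with an (x, y) position and a confidence score. As a rough illustration, the sketch below unpacks one person's key-points from a flat vector; the `(51,)` layout and the `parse_keypoints` helper are assumptions for this example, not the exact output format of YOLOv7-POSE's post-processing.

```python
import numpy as np

# The 17 COCO keypoints, in their standard order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def parse_keypoints(flat: np.ndarray) -> dict:
    """Reshape a flat (51,) vector of (x, y, conf) triplets into a
    name -> (x, y, conf) mapping. The flat layout is an assumption."""
    pts = flat.reshape(17, 3)
    return {name: tuple(p) for name, p in zip(COCO_KEYPOINTS, pts)}

person = parse_keypoints(np.random.rand(51))
print(person["left_hip"])  # -> (x, y, confidence)
```
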
We initially created an LSTM-based neural network that learned from a dataset of images of people labeled as "falling" or "not falling." We obtained roughly 500 images from the Internet for this purpose.
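
Below is a minimal sketch of such a classifier in PyTorch, assuming each frame's key-points are flattened into a 51-value vector and fed to the LSTM as a sequence; the layer sizes, the two-class head, and the `FallClassifier` name are illustrative assumptions, not the repository's actual architecture.

```python
import torch
import torch.nn as nn

class FallClassifier(nn.Module):
    """Sketch: an LSTM over per-frame keypoint vectors with a binary head."""

    def __init__(self, n_keypoints: int = 17, hidden: int = 64):
        super().__init__()
        # Each keypoint contributes (x, y, confidence), hence 51 inputs per frame.
        self.lstm = nn.LSTM(input_size=n_keypoints * 3,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # logits: "falling" vs. "not falling"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, 51) -- a sequence of flattened keypoint vectors.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # classify from the last hidden state

model = FallClassifier()
logits = model(torch.randn(4, 30, 51))  # 4 clips of 30 frames each
```
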