<h1 align="center">Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging</h1>

<h4 align="center">This is the official repository of the paper <a href="http://arxiv.org/abs/2212.07867">Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging</a>.</h4>
<h5 align="center"><em>Jianzhi Long<sup>1,2&#8727;</sup>, Jicang Cai<sup>2&#8727;</sup>, Abdullah F. Al-Battal<sup>2</sup>, Shiwei Jin<sup>2</sup>, Jing Zhang<sup>1</sup>, Dacheng Tao<sup>1,3</sup>, Imanuel Lerman<sup>2</sup>, Truong Nguyen<sup>2</sup></em></h5>
<h6 align="center">1 The University of Sydney, Australia; 2 University of California San Diego, USA; 3 JD Explore Academy, Beijing, China</h6>

<p align="center">
  <a href="#introduction">Introduction</a> |
  <a href="#scan-targets">Scan Targets</a> |
  <a href="#system-setup">System Setup</a> |
  <a href="#pipeline">Pipeline</a> |
  <a href="#running-the-code">Run Code</a> |
  <a href="#demo-video">Demo Video</a> |
  <a href="#installation">Installation</a> |
  <a href="#contact-info">Contact Info</a> |
  <a href="#acknowledgements">Acknowledgements</a> |
  <a href="#statement">Statement</a>
</p>

## Introduction
This repository contains the code, experiment results, and a video demo for the paper Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging. Scan target localization is defined as moving the ultrasound (US) transducer probe into the proximity of the target scan location. We combine a human pose estimation model with a specially designed interpolation model to predict the lung ultrasound scan targets, and deploy multi-view stereo vision to enhance the accuracy of 3D target localization.

We have released the code for the [implementation](src) of our proposed [pipeline](#pipeline) with the [system setup](#system-setup) shown below, as well as the [evaluation](src/evaluation) of the system performance. We also included a short [video demo](#demo-video) of localizing the scan target on a human subject to show the system in action.

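To make the pipeline concrete, the sketch below illustrates the general idea of the interpolation step: a scan-target location expressed as a weighted combination of torso keypoints returned by the pose estimator. The function, keypoint coordinates, and weights are all hypothetical simplifications for illustration; the actual trained interpolation model lives in [src](src).

```python
import numpy as np

def interpolate_target(keypoints, weights):
    """Illustrative only: place a scan target as a convex combination of
    torso keypoints (left/right shoulder, left/right hip), each given as
    (x, y) image coordinates from a pose estimator such as ViTPose."""
    pts = np.array([keypoints["l_shoulder"], keypoints["r_shoulder"],
                    keypoints["l_hip"], keypoints["r_hip"]], dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize so the target stays inside the torso
    return w @ pts           # weighted average of the four keypoints

# Hypothetical numbers: torso keypoints in pixels and hand-picked weights.
torso = {"l_shoulder": (420, 310), "r_shoulder": (540, 305),
         "l_hip": (430, 520), "r_hip": (535, 515)}
print(interpolate_target(torso, weights=[0.35, 0.35, 0.15, 0.15]))
```

In the full pipeline, the 2D predictions are then refined into a 3D target for the probe using multi-view stereo vision, as described in the paper.
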
## Scan Targets
<img src="https://github.com/JamesLong199/lung-ultrasound-scan-target-localization/blob/main/homepage/target_scan_locations.png?raw=true" width="60%"/>

In our project, we focus on localizing scan targets 1, 2, and 4.

## System Setup
<img src='https://github.com/JamesLong199/lung-ultrasound-scan-target-localization/blob/main/homepage/apparatus.png?raw=true' width="60%" />

## Pipeline
<img src='https://github.com/JamesLong199/lung-ultrasound-scan-target-localization/blob/main/homepage/pipeline.png?raw=true' width="100%" />

## Running the Code
Detailed instructions for running the code are included in the following `README.md` files:
- To perform one scanning trial, see <a href="https://github.com/JamesLong199/Autonomous-Transducer-Project/tree/main/src">`src/README.md`</a>.
- To evaluate results, see <a href="https://github.com/JamesLong199/Autonomous-Transducer-Project/tree/main/src/evaluation">`src/evaluation/README.md`</a> (an illustrative sketch of the error computation follows this list).

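For a sense of what the evaluation measures, the sketch below computes the per-target Euclidean distance between predicted and ground-truth 3D scan-target positions. This is our simplified assumption of the basic error metric, not the exact protocol; the full evaluation procedure is documented in `src/evaluation/README.md`.

```python
import numpy as np

def localization_errors(pred, gt):
    """Per-target Euclidean distance (e.g. in mm) between predicted and
    ground-truth 3D scan-target positions, both of shape (N, 3)."""
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    return np.linalg.norm(pred - gt, axis=1)

# Hypothetical positions in millimetres for three targets.
pred = [[102.0, 41.5, 350.2], [98.7, -60.3, 345.0], [150.1, 45.8, 360.4]]
gt   = [[100.0, 40.0, 352.0], [101.2, -58.9, 347.5], [148.0, 47.0, 358.0]]
err = localization_errors(pred, gt)
print("per-target error:", err.round(1), "mean:", err.mean().round(1))
```
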
## Demo Video
https://user-images.githubusercontent.com/60713478/221347137-9e76d059-1eaa-453e-aa6f-1683a4696ee8.mp4

## Installation

1. Clone this repository

    `git clone https://github.com/JamesLong199/Autonomous-Transducer-Project.git`

2. Go into the repository

    `cd Autonomous-Transducer-Project`

3. Create the conda environment and activate it

    `conda create -n Auto_US python=3.7`

    `conda activate Auto_US`

4. Install dependencies

    `pip install -r requirements.txt`

5. Download the [ViTPose](https://github.com/ViTAE-Transformer/ViTPose) models and use the corresponding config files. Place the models in `ViTPose/models`. We use
   1. ViTPose-L (COCO+AIC+MPII+CrowdPose)
   2. ViTPose-B (classic decoder, COCO)

6. Download a detector model for ViTPose and place it in `ViTPose/models` (a minimal usage sketch for the detector and pose models follows these installation steps). We use
   1. [YOLOv3](https://github.com/open-mmlab/mmdetection/tree/master/configs/yolo) (DarkNet-53, 320, 273e)

7. To use OpenPose, follow the instructions in the [official OpenPose documentation](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/0_index.md) to download the Windows Portable Demo. Place the `openpose` folder in the project's root directory.

Our code has been tested with Python 3.7.11 on Windows 11.

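As a quick sanity check that the downloaded detector and pose models load correctly, the sketch below runs a single image through the standard mmdet/mmpose top-down demo pattern that ViTPose builds on. The config, checkpoint, and image paths are placeholders to be swapped for the files from steps 5 and 6; the exact entry points this project uses are in [src](src).

```python
# Sanity-check sketch (assumes the mmdet/mmpose 0.x APIs used by ViTPose;
# all file paths below are placeholders, not files shipped with this repo).
from mmdet.apis import init_detector, inference_detector
from mmpose.apis import (init_pose_model, inference_top_down_pose_model,
                         process_mmdet_results)

# Person detector (YOLOv3 from step 6).
det_model = init_detector(
    "ViTPose/demo/mmdetection_cfg/yolov3_d53_320_273e_coco.py",   # placeholder config
    "ViTPose/models/yolov3_d53_320_273e_coco.pth",                # placeholder checkpoint
    device="cuda:0")

# Pose model (ViTPose-L from step 5).
pose_model = init_pose_model(
    "ViTPose/configs/ViTPose_large_coco_256x192.py",              # placeholder config
    "ViTPose/models/vitpose-l.pth",                               # placeholder checkpoint
    device="cuda:0")

img = "example_subject.jpg"                                       # placeholder image
person_results = process_mmdet_results(inference_detector(det_model, img), cat_id=1)
pose_results, _ = inference_top_down_pose_model(
    pose_model, img, person_results, bbox_thr=0.3, format="xyxy",
    dataset="TopDownCocoDataset")
print(pose_results[0]["keypoints"][:7])   # first few COCO keypoints (x, y, score)
```
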
## Contact Info
| Name  | Email |
| ------------- | ------------- |
| Jianzhi Long  | jlong@ucsd.edu |
| Jicang Cai  | j1cai@ucsd.edu  |

## Acknowledgements
We acknowledge the excellent implementations from [ViTPose](https://github.com/ViTAE-Transformer/ViTPose), [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose), and Rope Robotics (Denmark).

## Statement
This section will become available once the paper is on arXiv.