Introduction | Scan Targets | System Setup | Pipeline | Run Code | Demo Video | Installation | Contact Info | Acknowledge | Statement
This repository contains the code, experiment results, and a video demo for the paper Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging. Scan target localization is defined as moving the ultrasound (US) transducer probe to the proximity of the target scan location. We combine a human pose estimation model with a specially designed interpolation model to predict the lung ultrasound scan targets, and deploy multi-view stereo vision to enhance the accuracy of 3D target localization.
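As a rough illustration of the multi-view idea (this is not the repository's actual code), the sketch below uses OpenCV to triangulate a single keypoint observed in two calibrated camera views into a 3D point; the projection matrices and pixel coordinates are placeholder values.

```python
# Minimal sketch: triangulate one keypoint seen from two calibrated cameras.
# The projection matrices and pixel coordinates are illustrative placeholders.
import numpy as np
import cv2

# 3x4 projection matrices P = K [R | t] for the two views (placeholder values)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

# The same keypoint observed in each image, as (x, y) pixel coordinates
pt1 = np.array([[320.0], [240.0]])  # view 1
pt2 = np.array([[300.0], [240.0]])  # view 2

# cv2.triangulatePoints returns homogeneous 4x1 coordinates
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()  # convert to Euclidean 3D
print("Triangulated 3D point:", X)
```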
We have released the code for the implementation of our proposed pipeline with the system setup shown below, as well as for evaluating the system performance. We also include a short video demo of localizing the scan target on a human subject to show the system in action.
In our project, we focus on localizing scan targets 1, 2, and 4.
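The actual interpolation model is described in the paper and implemented in src/; purely for intuition, the toy sketch below interpolates one chest scan-target position as a fixed convex combination of detected torso keypoints. The keypoint names follow the COCO convention, and the weights are made up for illustration, not the values used in this project.

```python
import numpy as np

# Toy example: detected 2D torso keypoints (x, y) in image coordinates.
keypoints = {
    "left_shoulder":  np.array([300.0, 200.0]),
    "right_shoulder": np.array([380.0, 200.0]),
    "left_hip":       np.array([310.0, 380.0]),
    "right_hip":      np.array([370.0, 380.0]),
}

# Hypothetical interpolation weights for one scan target: a convex combination
# of the four torso keypoints (NOT the weights used in the paper).
weights = {
    "left_shoulder": 0.35,
    "right_shoulder": 0.35,
    "left_hip": 0.15,
    "right_hip": 0.15,
}

target = sum(w * keypoints[name] for name, w in weights.items())
print("Interpolated scan-target position (pixels):", target)
```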
Detailed instructions for running the code are included in the following README.md files:
- To perform one scanning trial, see src/README.md.
- To evaluate results, see src/evaluation/README.md (a toy sketch of the kind of error metric involved follows below).
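For a rough sense of the metric the evaluation involves, the snippet below computes per-target and mean Euclidean distances between predicted and ground-truth 3D positions on dummy data; see src/evaluation/README.md for the actual procedure.

```python
import numpy as np

# Dummy data: predicted and ground-truth 3D target positions in millimeters.
pred = np.array([[ 12.0,  45.0, 310.0],
                 [-30.0,  50.0, 305.0],
                 [  5.0, 110.0, 298.0]])
gt   = np.array([[ 10.0,  43.0, 312.0],
                 [-28.0,  52.0, 300.0],
                 [  7.0, 108.0, 300.0]])

# Per-target Euclidean localization error and its mean.
errors = np.linalg.norm(pred - gt, axis=1)
print("per-target error (mm):", errors)
print("mean error (mm):", errors.mean())
```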
Clone this repository:
```
git clone https://github.com/JamesLong199/Autonomous-Transducer-Project.git
```
Go into the repository:
```
cd Autonomous-Transducer-Project
```
Create the conda environment and activate it:
```
conda create -n Auto_US python=3.7
conda activate Auto_US
```
Install dependencies:
```
pip install -r requirements.txt
```
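Optionally, sanity-check the environment before downloading the models. The snippet below only assumes that PyTorch is among the dependencies installed by requirements.txt; adjust it if your setup differs.

```python
# Quick sanity check (assumes PyTorch is among the installed dependencies).
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```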
Download the ViTPose models and use the corresponding config file. Place the models in ViTPose/models. We use ViTPose-B (classic decoder, COCO).
Download the detector model for ViTPose. Place the model in ViTPose/models. We use YOLOv3 (DarkNet-53, 320, 273e).
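ViTPose builds on mmpose, so a single image can be run through the usual top-down mmdet + mmpose inference APIs. The sketch below is not the repository's pipeline code; the config and checkpoint paths are examples and should point to the files you actually placed in ViTPose/models.

```python
# Sketch of top-down pose inference with the mmdet/mmpose APIs that ViTPose builds on.
# Config/checkpoint paths are examples -- adjust them to the files you downloaded.
from mmdet.apis import init_detector, inference_detector
from mmpose.apis import init_pose_model, inference_top_down_pose_model

det_model = init_detector(
    'demo/mmdetection_cfg/yolov3_d53_320_273e_coco.py',
    'ViTPose/models/yolov3_d53_320_273e.pth',
    device='cuda:0')
pose_model = init_pose_model(
    'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/ViTPose_base_coco_256x192.py',
    'ViTPose/models/vitpose-b-coco.pth',
    device='cuda:0')

img = 'example.jpg'
det_results = inference_detector(det_model, img)
# COCO class 0 is "person"; each row is [x1, y1, x2, y2, score].
person_results = [{'bbox': bbox} for bbox in det_results[0]]

pose_results, _ = inference_top_down_pose_model(
    pose_model, img, person_results, bbox_thr=0.3, format='xyxy')
print(pose_results[0]['keypoints'])  # 17 COCO keypoints as (x, y, score)
```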
To use OpenPose, follow the instructions in the official OpenPose documentation to download the Windows Portable Demo. Place the openpose folder in the project's root directory.
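If you use OpenPose instead, the portable demo is a standalone executable; one way to drive it from Python and collect its keypoint JSON output is sketched below. The folder names are examples, and the flags follow the standard OpenPoseDemo.exe command-line interface.

```python
# Sketch: drive the OpenPose Windows portable demo from Python and read its JSON output.
# Folder names are examples -- adjust to where the openpose folder and your images live.
import json
import subprocess
from pathlib import Path

openpose_dir = Path("openpose").resolve()     # portable demo placed in the project root
image_dir = Path("data/images").resolve()     # example input folder
out_dir = Path("data/openpose_json").resolve()
out_dir.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [str(openpose_dir / "bin" / "OpenPoseDemo.exe"),
     "--image_dir", str(image_dir),
     "--write_json", str(out_dir),
     "--display", "0", "--render_pose", "0"],
    cwd=openpose_dir,   # OpenPose must run from its own folder to find its models/
    check=True)

# Each output file contains the detected people with body keypoints as (x, y, confidence) triples.
for f in sorted(out_dir.glob("*_keypoints.json")):
    people = json.loads(f.read_text())["people"]
    print(f.name, "-> people detected:", len(people))
```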
Our code has been tested with Python 3.7.11 on Windows 11.
Name | Email
---|---
Jianzhi Long | jlong@ucsd.edu
Jicang Cai | j1cai@ucsd.edu
We acknowledge the excellent implementations from ViTPose, OpenPose, and Rope Robotics (Denmark).
Will become available once the paper is on arXiv.