
Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging

This is the official repository of the paper Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging.

Jianzhi Long1,2∗, Jicang Cai2∗, Abdullah F. Al-Battal2, Shiwei Jin2, Jing Zhang1, Dacheng Tao1,3, Imanuel Lerman2, Truong Nguyen2
1 The University of Sydney, Australia; 2 University of California San Diego, USA; 3 JD Explore Academy, Beijing, China

Introduction | Scan Targets | System Setup | Pipeline | Running the Code | Demo Video | Installation | Contact Info | Acknowledgements | Statement

Introduction

This repository contains the code, experimental results, and a video demo for the paper Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging. Scan target localization is defined as moving the ultrasound (US) transducer probe to the proximity of the target scan location. We combine a human pose estimation model with a specially designed interpolation model to predict the lung ultrasound scan targets, and deploy multi-view stereo vision to enhance the accuracy of 3D target localization.
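
To make the multi-view step concrete, below is a minimal sketch of triangulating a single pose keypoint seen from two calibrated cameras with OpenCV. The intrinsics, extrinsics, and pixel coordinates are placeholder values, not our system's calibration, and the sketch is an illustration rather than the paper's exact implementation.

    # Triangulate one torso keypoint from two calibrated views (placeholder values).
    import cv2
    import numpy as np

    # Shared camera intrinsics (placeholder).
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Camera 1 at the origin; camera 2 shifted 10 cm along x (placeholder extrinsics).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # The same keypoint detected by the pose model in each view (pixel coordinates).
    pt1 = np.array([[330.0], [250.0]])
    pt2 = np.array([[270.0], [250.0]])

    # Triangulate to a homogeneous 3D point, then dehomogenize.
    Xh = cv2.triangulatePoints(P1, P2, pt1, pt2)
    X = (Xh[:3] / Xh[3]).ravel()
    print("3D keypoint in camera-1 frame (meters):", X)  # ~[0.017, 0.017, 1.0]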

We have released the code implementing our proposed pipeline with the system setup shown below, as well as the evaluation of the system's performance. We also include a short video demo of localizing the scan target on a human subject to show the system in action.

Scan Targets


In our project, we focus on localizing scan targets 1, 2, and 4.
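
For intuition only, the interpolation idea can be sketched as expressing each scan target as a fixed affine combination of triangulated torso keypoints. The keypoint choice and weights below are made-up placeholders, not the model described in the paper.

    import numpy as np

    # Hypothetical weights over [l_shoulder, r_shoulder, l_hip, r_hip]
    # (COCO keypoint indices 5, 6, 11, 12); each weight vector sums to 1.
    TARGET_WEIGHTS = {
        1: np.array([0.45, 0.45, 0.05, 0.05]),  # upper-chest target
        2: np.array([0.35, 0.35, 0.15, 0.15]),
        4: np.array([0.15, 0.15, 0.35, 0.35]),  # lower target, nearer the hips
    }

    def interpolate_target(target_id, torso_xyz):
        """torso_xyz: (4, 3) array of 3D keypoints; returns a (3,) target position."""
        return TARGET_WEIGHTS[target_id] @ torso_xyz

    torso = np.array([[0.20, 0.00, 1.0],    # left shoulder
                      [-0.20, 0.00, 1.0],   # right shoulder
                      [0.15, 0.50, 1.0],    # left hip
                      [-0.15, 0.50, 1.0]])  # right hip
    print(interpolate_target(1, torso))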

System Setup

Pipeline

Running the Code

Detailed instructions for running the code are included in the following README.md files:
- To perform one scanning trial, see src/README.md.
- To evaluate results, see src/evaluation/README.md.

Demo Video

https://user-images.githubusercontent.com/60713478/221347137-9e76d059-1eaa-453e-aa6f-1683a4696ee8.mp4

Installation

  1. Clone this repository

    git clone https://github.com/JamesLong199/Autonomous-Transducer-Project.git

  2. Go into the repository

    cd Autonomous-Transducer-Project

  3. Create and activate a conda environment

    conda create -n Auto_US python=3.7

    conda activate Auto_US

  4. Install dependencies

    pip install -r requirements.txt

  5. Download the ViTPose models and use the corresponding config files. Place the models in ViTPose/models. We use:

     - ViTPose-L (COCO+AIC+MPII+CrowdPose)
     - ViTPose-B (classic decoder, COCO)

  6. Download the detector model for ViTPose and place it in ViTPose/models (a minimal usage sketch of the detector and pose models follows this list). We use:

     - YOLOv3 (DarkNet-53, 320, 273e)

  7. To use OpenPose, follow the instructions in the official OpenPose documentation to download the Windows Portable Demo. Place the openpose folder in the project's root directory.
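
As referenced in step 6, here is a minimal sketch of loading the downloaded detector and pose models and running them on an image, assuming the ViTPose fork of mmpose plus mmdet from requirements.txt. The config and checkpoint paths are placeholders; substitute the files you downloaded in steps 5 and 6.

    # Run the YOLOv3 person detector, then ViTPose on the detected person boxes.
    from mmdet.apis import init_detector, inference_detector
    from mmpose.apis import init_pose_model, inference_top_down_pose_model

    # Placeholder config/checkpoint paths; point these at your downloaded files.
    det_model = init_detector(
        'configs/yolov3_d53_320_273e_coco.py',
        'ViTPose/models/yolov3_d53_320_273e_coco.pth',
        device='cuda:0')
    pose_model = init_pose_model(
        'configs/ViTPose_large_coco_256x192.py',
        'ViTPose/models/vitpose-l.pth',
        device='cuda:0')

    img = 'subject.jpg'  # placeholder input image
    # Class 0 of the COCO-trained detector is 'person'; wrap its boxes for mmpose.
    person_results = [{'bbox': b} for b in inference_detector(det_model, img)[0]]
    pose_results, _ = inference_top_down_pose_model(
        pose_model, img, person_results, format='xyxy')
    print(pose_results[0]['keypoints'])  # (17, 3) array: x, y, score per COCO joint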

Our code has been tested with Python 3.7.11 on Windows 11.

Contact Info

| Name | Email |
| ---- | ----- |
| Jianzhi Long | jlong@ucsd.edu |
| Jicang Cai | j1cai@ucsd.edu |

Acknowledgements

We acknowledge the excellent implementations from ViTPose, OpenPose, and Rope Robotics (Denmark).

Statement

Will become available once the paper is on arXiv.