## RoCoSDF: Row-Column Scanned Neural Signed Distance Fields for Freehand 3D Ultrasound Imaging Shape Reconstruction
--------------------------------------

The official implementation of the MICCAI 2024 paper:
[RoCoSDF: Row-Column Scanned Neural Signed Distance Fields for Freehand 3D Ultrasound Imaging Shape Reconstruction](https://chenhbo.github.io/RoCoSDF/)
by [Hongbo Chen](https://chenhbo.github.io/), Yuchong Gao, Shuhang Zhang, Jiangjie Wu, [Yuexin Ma](https://yuexinma.me/) and [Rui Zheng](https://sist.shanghaitech.edu.cn/zhengrui_en/main.htm).

RoCoSDF is a framework built on neural implicit signed distance functions for shape reconstruction from multi-view freehand 3D ultrasound imaging.

<div align="center">
<img src="img/Fig_RoCoScan.png" style="zoom:15%" alt="Data Acquisition Protocol"/>
</div>

## Demo

* Thoracic Vertebra T4 from ultrasound transducer 1 (UT1)

<div align="center">
<img src="img/Fig_Result_T4.png" style="zoom:14.7%" alt="Reconstruction Result on T4"/>
</div>

<br />

* The example mesh results of RoCoSDF are in `outs/T4_RoCo/outputs/*.ply`.
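To take a quick look at an exported mesh without a 3D viewer, one option is to parse the PLY header with the standard library alone. This is a minimal sketch, not part of the repository; it assumes an ASCII-format PLY, and the exact file name under `outs/T4_RoCo/outputs/` is a placeholder.

```python
# Sketch: report element counts (vertices, faces) from an ASCII PLY header.
def ply_header(path):
    """Return {element_name: count}, e.g. {'vertex': N, 'face': M}."""
    counts = {}
    with open(path) as f:
        assert f.readline().strip() == "ply", "not a PLY file"
        for line in f:
            line = line.strip()
            if line.startswith("element "):
                _, name, n = line.split()
                counts[name] = int(n)
            elif line == "end_header":
                break
    return counts

# Usage (the path is hypothetical):
# print(ply_header("outs/T4_RoCo/outputs/T4_RoCo.ply"))
```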

--------------------------------------

## Usage
Our code was tested on an NVIDIA RTX 3090 GPU with Ubuntu 18.04/20.04, Python 3.8, PyTorch 1.12.1, and CUDA 11.6.

### Install Dependencies
For RTX 20/30 series GPUs:
```
conda create -n rocosdf python=3.8
conda activate rocosdf
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install tqdm pyhocon==0.3.57 trimesh PyMCubes scipy matplotlib
pip install visdom open3d scikit-image plyfile
```

For RTX 40 series GPUs (CUDA 11.8):
```
conda create -n rocosdf python=3.10
conda activate rocosdf
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install tqdm pyhocon==0.3.57 trimesh PyMCubes scipy matplotlib
pip install visdom open3d scikit-image plyfile
```
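After either install, a quick sanity check (a sketch, not part of the repository) is to verify that the key packages from the commands above resolve. Note that `skimage` and `mcubes` are the import names of scikit-image and PyMCubes.

```python
# Sketch: report which of the installed packages can be imported.
import importlib.util

for pkg in ["torch", "torchvision", "trimesh", "mcubes", "open3d", "skimage", "plyfile", "visdom"]:
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")
```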
--------------------------------------

### Data Preparation
- Convert the row-scan and column-scan segmented volumetric masks to point cloud files (*.ply).

- Both the row-scan and column-scan point clouds should be in the same tracking space or manually aligned in a unified space.

- Put the row-scan and column-scan point cloud data in `./data`.

```
RoCoSDF/
│
├── data/
│   ├── T4_Co.ply          % your own data
│   ├── T4_Ro.ply          % your own data
│   ├── T4_Co_ds.pt        % generated during data preprocessing, downsampled point clouds for training
│   ├── T4_Ro_ds.pt        % generated during data preprocessing, downsampled point clouds for training
│   ├── T4_Co_sampler.pt   % generated during training
│   └── T4_Ro_sampler.pt   % generated during training
│
├── outs/
│   ├── T4_Co/
│   │   └── outputs/
│   │       └── *.ply
│   ├── T4_Ro/
│   │   └── outputs/
│   │       └── *.ply
│   └── T4_RoCo/
│       └── outputs/
│           └── *.ply
```
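The first preparation step (volumetric mask to point cloud) can be sketched with NumPy alone by lifting foreground voxel indices to 3D points and writing an ASCII PLY by hand. This is a hedged sketch: the voxel spacing and output path are placeholder assumptions, and the repository's own preprocessing may differ.

```python
import numpy as np

def mask_to_ply(mask, spacing=(1.0, 1.0, 1.0), out_path="data/T4_Ro.ply"):
    """Write the foreground voxels of a binary mask as an ASCII PLY point cloud."""
    idx = np.argwhere(mask > 0).astype(np.float64)      # (N, 3) voxel indices
    pts = idx * np.asarray(spacing, dtype=np.float64)   # scale to physical units
    with open(out_path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(pts)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in pts:
            f.write(f"{x} {y} {z}\n")
    return pts
```

The same scaling would need to match your tracking-space calibration so that the row-scan and column-scan clouds land in the unified space described above.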

--------------------------------------

### Run RoCoSDF
On Linux, train the model directly with `sh train.sh`, or use the command below.

```
python runRoCoSDF.py --gpu 0 --conf confs/conf.conf --dataname T4_Co --dataname2 T4_Ro --dir T4_Co --dir2 T4_Ro --dir3 T4_RoCo --mode train
```

### Run SDF Refinement Only
On Linux, run the refinement stage directly with `sh train_refine_only.sh`, or use the command below.

```
python runRoCoSDF.py --gpu 0 --conf confs/conf.conf --dataname T4_Co --dataname2 T4_Ro --dir T4_Co --dir2 T4_Ro --dir3 T4_RoCo --mode train_refine
```

### Contact
For any queries, please contact [chenhb[at]shanghaitech.edu.cn](mailto:chenhb@shanghaitech.edu.cn).

### Citation
If you use RoCoSDF in your research, please cite the paper:

```
@InProceedings{chenRoCoSDF,
  author="Chen, Hongbo
    and Gao, Yuchong
    and Zhang, Shuhang
    and Wu, Jiangjie
    and Ma, Yuexin
    and Zheng, Rui",
  title="RoCoSDF: Row-Column Scanned Neural Signed Distance Fields for Freehand 3D Ultrasound Imaging Shape Reconstruction",
  booktitle="Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024",
  year="2024",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="721--731",
  isbn="978-3-031-72083-3"
}
```
--------------------------------------

### References
Our code builds on the following repositories. We appreciate all the contributors.

* FUNSR (UNSR in the proceedings): https://github.com/chenhbo/FUNSR
|
|
138 |
|
|
|
139 |
* DeepSDF: https://github.com/facebookresearch/DeepSDF |
|
|
140 |
|
|
|
141 |
* NeuralPull: https://github.com/mabaorui/NeuralPull-Pytorch |
|
|
142 |
|
|
|
143 |
* GenSDF: https://github.com/princeton-computational-imaging/gensdf |
|
|
144 |
|
|
|
145 |
* CSGSDF: https://github.com/zoemarschner/csg_on_nsdf |