# Brainchop [License](https://github.com/neuroneural/brainchop/blob/master/LICENSE) [Model](https://github.com/neuroneural/brainchop/tree/master/models/mnm_tfjs_me_test) [DOI](https://doi.org/10.21105/joss.05098)

<div align="center">
<a href="https://neuroneural.github.io/brainchop">
<img width="100%" src="https://github.com/neuroneural/brainchop/releases/download/v3.4.0/Banner.png">
</a>

**Frontend For Neuroimaging. Open Source**

**[brainchop.org](https://neuroneural.github.io/brainchop)   [Updates](#Updates)   [Doc](https://github.com/neuroneural/brainchop/wiki/)   [News!](#News)   [Cite](#Citation)   [v3](https://neuroneural.github.io/brainchop/v3)**

</div>

<br>
<img src="https://github.com/neuroneural/brainchop/blob/master/css/logo/brainchop_logo.png" width="25%" align="right">

<p align="justify">
<b><a href="https://neuroneural.github.io/brainchop/" style="text-decoration: none">Brainchop</a></b> brings automatic 3D MRI volumetric segmentation to neuroimaging by running a lightweight deep learning model (e.g., <a href="https://medium.com/pytorch/catalyst-neuro-a-3d-brain-segmentation-pipeline-for-mri-b1bb1109276a" target="_blank" style="text-decoration: none">MeshNet</a>) in the web browser, so inference runs entirely on the user's side.
</p>
<p align="justify">
We make the implementation of Brainchop freely available, releasing its pure JavaScript code as open source. The user interface (UI) provides a web-based end-to-end solution for 3D MRI segmentation. The <b><a href="https://github.com/niivue/niivue" style="text-decoration: none">NiiVue</a></b> viewer is integrated with the tool for MRI visualization. For more information about Brainchop, please refer to the detailed <b><a href="https://github.com/neuroneural/brainchop/wiki/" style="text-decoration: none">Wiki</a></b> and this <b><a href="https://trendscenter.org/in-browser-3d-mri-segmentation-brainchop-org/" style="text-decoration: none">Blog</a></b> post.

For questions or to share ideas, please use our <b><a href="https://github.com/neuroneural/brainchop/discussions/" style="text-decoration: none">Discussions</a></b> board.
</p>
<div align="center">

**Brainchop high-level architecture**

</div>
<div align="center">

**MeshNet deep learning architecture used for inference with Brainchop** (MeshNet <a href="https://arxiv.org/pdf/1612.00940.pdf" target="_blank" style="text-decoration: none">paper</a>)

</div>
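The key property of the MeshNet architecture above is that a short stack of dilated 3x3x3 convolutions, with no pooling or downsampling, can cover a large volumetric context. A minimal sketch of that receptive-field arithmetic (the dilation schedule below is illustrative, not the exact configuration of the shipped models):

```python
# Receptive field of stacked dilated 3D convolutions (MeshNet's core idea):
# each kernel-k layer with dilation d widens the receptive field by d * (k - 1)
# voxels per axis, so depth stays small while spatial context grows quickly.

def receptive_field(dilations, kernel=3):
    """Per-axis receptive field (in voxels) of a stack of dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += d * (kernel - 1)
    return rf

# Hypothetical MeshNet-like schedule: unit dilations, then exponential growth.
dilations = [1, 1, 1, 2, 4, 8, 1]
print(receptive_field(dilations))  # -> 37
```

With exponentially growing dilations, seven layers already see a 37-voxel cube of context, which is why this family of models stays small enough to run in the browser.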
|
|
## MeshNet Example

This basic example provides an overview of the training pipeline for the MeshNet model.

* [MeshNet basic training example](./py2tfjs/MeshNet_Training_Example.ipynb) ([run on Colab](https://colab.research.google.com/github/neuroneural/brainchop/blob/master/py2tfjs/MeshNet_Training_Example.ipynb))

* [Convert the trained MeshNet model to a tfjs model](./py2tfjs/Convert_Trained_Model_To_TFJS.ipynb) ([run on Colab](https://colab.research.google.com/github/neuroneural/brainchop/blob/master/py2tfjs/Convert_Trained_Model_To_TFJS.ipynb))
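A fixed-width, fully convolutional model of this kind is also very cheap in parameters, which is what makes shipping the trained weights to a browser practical. A back-of-the-envelope count (the channel width, class count, and depth below are illustrative assumptions, not the exact shipped configuration):

```python
# Rough parameter count for a MeshNet-style model: a fixed-width stack of
# 3x3x3 convolutions ending in a 1x1x1 classification head. The width (21),
# class count (3), and depth (6 hidden layers) are hypothetical values chosen
# only to show the order of magnitude.

def conv3d_params(c_in, c_out, k=3):
    """Weights + biases of one 3D convolution layer with a k x k x k kernel."""
    return k ** 3 * c_in * c_out + c_out

channels, classes, hidden_layers = 21, 3, 6
total = (
    conv3d_params(1, channels)                           # input layer (1 MRI channel)
    + hidden_layers * conv3d_params(channels, channels)  # dilated hidden layers
    + conv3d_params(channels, classes, k=1)              # 1x1x1 classification head
)
print(total)  # -> 72222
```

At roughly 72k float32 parameters this is under 0.3 MB of weights, orders of magnitude smaller than typical U-Net-style segmentation models, so download and in-browser inference both stay fast.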
<br>

## Live Demo

To see Brainchop **v4** in action, please click [here](https://neuroneural.github.io/brainchop), or watch a demo video:

<div align="center">

[Watch the Brainchop v4 demo video](https://github.com/neuroneural/brainchop/releases/download/v4.1.0/Brainchop_overhaul.mp4)

</div>

For **v3**, click [here](https://neuroneural.github.io/brainchop/v3).
<br>

## Updates

<div align="center">

<img src="https://github.com/neuroneural/brainchop/releases/download/v4.0.0/Brainchop_Niivue.png" width="100%">

**Brainchop <a href="https://neuroneural.github.io/brainchop/" target="_blank" style="text-decoration: none">v4</a> with the <a href="https://github.com/niivue/niivue" target="_blank" style="text-decoration: none">NiiVue</a> viewer**

</div>

<br>

<div align="center">

<img src="https://github.com/neuroneural/brainchop/releases/download/v3.4.0/BrainchopMoreRobustModels.gif" width="60%">

**Brainchop <a href="https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none">v3</a> with more robust models**

</div>
<br>

<div align="center">

**Brainchop <a href="https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none">v1.4.0 - v3.4.0</a> rendering an MRI NIfTI file in 3D**

</div>

<br>

<div align="center">

**Brainchop <a href="https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none">v1.3.0 - v3.4.0</a> rendering segmentation output in 3D**

</div>
## News!

* The Brainchop [v2.2.0](https://github.com/neuroneural/brainchop/releases/tag/v2.2.0) paper was accepted at the 21st IEEE International Symposium on Biomedical Imaging ([ISBI 2024](https://biomedicalimaging.org/2024/)). An extended arXiv version can be found [here](https://arxiv.org/abs/2310.16162).

<div align="center">
<img src="https://github.com/neuroneural/brainchop/blob/master/css/news/ISBI_2024.jpeg" width="40%">
</div>

<br>
<br>

* The Brainchop [paper](https://doi.org/10.21105/joss.05098) was published in the Journal of Open Source Software (JOSS) on March 28, 2023.

<div align="center">
<a href="https://doi.org/10.21105/joss.05098"><img src="https://github.com/neuroneural/brainchop/blob/master/css/news/JOSS_Logo.png"></a>
</div>
<br>
<br>

* A Brainchop abstract was accepted for poster presentation at the 2023 [OHBM](https://www.humanbrainmapping.org/) Annual Meeting.

<div align="center">
<img src="https://github.com/neuroneural/brainchop/blob/master/css/news/OHBM_2023.jpeg" width="40%">
</div>

<br>
<br>

* A one-page Brainchop abstract and poster were accepted at the 20th IEEE International Symposium on Biomedical Imaging ([ISBI 2023](https://2023.biomedicalimaging.org/en/)).

<div align="center">
<img src="https://github.com/neuroneural/brainchop/blob/master/css/news/ISBI_2023.png" width="40%">
</div>
<br>
<br>

* Brainchop received the Google TensorFlow Community Spotlight award (Sept 2022), announced on [LinkedIn](https://www.linkedin.com/posts/tensorflow-community_github-neuroneuralbrainchop-brainchop-activity-6978796859532181504-cfCW?utm_source=share&utm_medium=member_desktop) and [Twitter](https://twitter.com/TensorFlow/status/1572980019999264774).

<div align="center">
<img src="https://github.com/neuroneural/brainchop/blob/master/css/news/TF_CommunityAward.png" width="60%">
</div>

<br>
<br>

* Brainchop was invited to the [PyTorch](https://pytorch.org/ecosystem/ptc/2022) flagship conference, New Orleans, Louisiana (Dec 2022).

<div align="center">
<img src="https://github.com/neuroneural/brainchop/blob/master/css/news/Pytorch_Poster.jpg" width="50%">
</div>

<br>
<br>

* Brainchop was invited to TensorFlow.js Show & Tell episode #7 (Jul 2022).

<div align="center">
<img src="https://github.com/neuroneural/brainchop/blob/master/css/news/TF_show_tell.png" width="50%">
</div>
## Citation

The Brainchop [paper](https://doi.org/10.21105/joss.05098) for v2.1.0 was published on March 28, 2023, in the Journal of Open Source Software (JOSS).

<br>

For **APA** style, the paper can be cited as:

> Masoud, M., Hu, F., & Plis, S. (2023). Brainchop: In-browser MRI volumetric segmentation and rendering. Journal of Open Source Software, 8(83), 5098. https://doi.org/10.21105/joss.05098

<br>

For the **BibTeX** format used by some publishers, please use:

```bibtex
@article{Masoud2023,
  doi = {10.21105/joss.05098},
  url = {https://doi.org/10.21105/joss.05098},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {83},
  pages = {5098},
  author = {Mohamed Masoud and Farfalla Hu and Sergey Plis},
  title = {Brainchop: In-browser MRI volumetric segmentation and rendering},
  journal = {Journal of Open Source Software}
}
```
<br>

For **MLA** style:

> Masoud, Mohamed, Farfalla Hu, and Sergey Plis. ‘Brainchop: In-Browser MRI Volumetric Segmentation and Rendering’. Journal of Open Source Software, vol. 8, no. 83, The Open Journal, 2023, p. 5098, https://doi.org/10.21105/joss.05098.

<br>

For **IEEE** style:

> M. Masoud, F. Hu, and S. Plis, ‘Brainchop: In-browser MRI volumetric segmentation and rendering’, Journal of Open Source Software, vol. 8, no. 83, p. 5098, 2023. doi:10.21105/joss.05098
<br>

## Contribution and Authorship Guidelines

If you modify or extend Brainchop in a derivative work intended for publication (such as a research paper or software tool), please cite and acknowledge the original Brainchop project and its authors. A proper acknowledgment should include the following:

> **"Brainchop, originally developed by Mohamed Masoud and Sergey Plis (2023), was used in the development of this work."**

We also request that significant contributions to derivative works be recognized by including the original authors as co-authors, where appropriate.

<br>
## Funding

This work was funded by NIH grant RF1MH121885, with additional support from NIH grants R01MH123610 and R01EB006841 and NSF grant 2112455.

<br />
<div align="center">

<img src='https://github.com/neuroneural/brainchop/blob/master/css/logo/TReNDS_logo.jpg' width='300' height='100'>

**Mohamed Masoud - Sergey Plis - 2024**

</div>