.. _introduction: |

**This guide is still under construction**

Introduction
================================================================================
DOSMA is an open-source Python library and application for medical image analysis. |

DOSMA is designed to streamline medical image analysis by standardizing medical image
I/O, simplifying array-like operations on medical images, and deploying state-of-the-art
image analysis algorithms. Because DOSMA is a framework, it is flexible enough for writing
analysis protocols that can be run across different imaging modalities and scan sequences.

For example, the analysis workflow for a combination of quantitative DESS, CubeQuant
(3D fast spin echo), and ultra-short echo time (UTE) Cones scans for multiple patients
(shown below) can be built in 7 lines of code:

.. figure:: figures/workflow.png
   :align: center
   :alt: Example workflow for analyzing multiple scans per patient
   :figclass: align-center

   Example quantitative knee MRI workflow for analyzing 1. quantitative DESS (qDESS),
   a |T2|-weighted sequence, 2. CubeQuant, a |T1rho|-weighted sequence, and 3. ultra-short
   echo time (UTE) Cones, a |T2star|-weighted sequence.
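
A minimal sketch of what such a workflow could look like is shown below. It is
illustrative only: the module paths and method names (``QDess.from_dicom``, ``segment``,
``generate_t2_map``, etc.) are assumptions based on the module layout described in the
Workflow section and may differ from the API in your installed version of DOSMA.

.. code-block:: python

   from dosma.scan_sequences import Cones, CubeQuant, QDess  # scan modules (assumed paths)
   from dosma.tissues import FemoralCartilage                # tissue module (assumed path)

   tissue = FemoralCartilage()
   seg_model = ...  # placeholder for a pretrained segmentation model (see the deep learning disclaimer)

   # qDESS: segment femoral cartilage, then fit a T2 map (method names are assumptions).
   qdess = QDess.from_dicom("patient01/qdess")
   qdess.segment(seg_model, tissue)
   qdess.generate_t2_map(tissue)

   # CubeQuant: fit a T1rho map for the same tissue.
   cubequant = CubeQuant.from_dicom("patient01/cubequant")
   cubequant.generate_t1_rho_map(tissue)

   # UTE Cones: fit a T2* map.
   cones = Cones.from_dicom("patient01/cones")
   cones.generate_t2_star_map(tissue)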

Workflow
--------------------------------------------------------------------------------
DOSMA uses various modules to handle musculoskeletal (MSK) analysis for multiple scan types and tissues:

- **Scan** modules declare scan-specific actions (fitting, segmentation, registration, etc.).
- **Tissue** modules handle visualization and analysis optimized for different tissues.
- **Analysis** modules abstract the methods used to perform those actions (e.g., different segmentation or fitting algorithms).

**Note**: DOSMA is still in beta, and APIs are subject to change.

Features
--------------------------------------------------------------------------------

Dynamic Input/Output (I/O)
^^^^^^^^^^^^^^^^^^^^^^^^^^
Reading and writing medical images relies on standardized data formats.
The Digital Imaging and Communications in Medicine (DICOM) format has been the international
standard for medical image I/O. However, DICOM header information is memory intensive and
may not be useful in cases where only volume information is desired.

The Neuroimaging Informatics Technology Initiative (NIfTI) format is useful in these cases.
It stores only volume-specific header information (rotation, position, resolution, etc.) with
the volume.

DOSMA supports both formats. However, because NIfTI headers do not contain relevant scan
information, it is not possible to perform quantitative analyses that require this information.
Therefore, we recommend starting processing with DICOM inputs, which are the standard output of
acquisition systems.

By default, volumes (segmentations, quantitative maps, etc.) are written in the NIfTI format.
The default output file format can be changed in the :ref:`preferences <faq-citation>`.
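
For example, a scan might be loaded from DICOM and written back out as NIfTI. The snippet
below is a minimal sketch; it assumes ``DicomReader``, ``NiftiReader``, and ``NiftiWriter``
are importable from the top-level ``dosma`` package (in some versions they live under
``dosma.io``), and the file paths are hypothetical.

.. code-block:: python

   import dosma

   # Load a DICOM series; DOSMA groups the slices into one or more volumes.
   dr = dosma.DicomReader()
   volumes = dr.load("data/patient01/qdess")

   # Save a volume as NIfTI, which keeps only volume-level metadata.
   nw = dosma.NiftiWriter()
   nw.save(volumes[0], "outputs/qdess_echo1.nii.gz")

   # Load it back later when scan-specific header information is not needed.
   nr = dosma.NiftiReader()
   echo1 = nr.load("outputs/qdess_echo1.nii.gz")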

Array-Like Medical Images
^^^^^^^^^^^^^^^^^^^^^^^^^^
Medical images are spatially-aware pixel arrays with metadata. With the :class:`MedicalVolume`
data structure, DOSMA supports array-like operations (arithmetic, slicing, etc.) on medical
images while preserving spatial attributes and accompanying metadata. It also provides
intelligent reformatting, fast low-level computations, and native GPU support.
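
A minimal sketch of these array-like semantics is shown below, assuming a
:class:`MedicalVolume` can be constructed from a NumPy array and a 4x4 affine matrix;
the ``reformat`` and device calls are likewise assumptions and may differ by DOSMA version.

.. code-block:: python

   import numpy as np
   import dosma

   # A medical image is a pixel array plus a 4x4 affine matrix describing its spatial layout.
   arr = np.random.rand(64, 64, 16).astype(np.float32)
   mv = dosma.MedicalVolume(arr, affine=np.eye(4))

   # Array-like operations return MedicalVolumes with spatial metadata preserved.
   masked = (mv > 0.5) * mv       # elementwise comparison and arithmetic
   subvolume = mv[:32, :32, :8]   # slicing also updates the spatial metadata

   # Reformatting and GPU support (names assumed; check your installed version's API).
   # reoriented = mv.reformat(("SI", "AP", "LR"))
   # mv_gpu = mv.to(dosma.Device(0))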

Disclaimers
--------------------------------------------------------------------------------

Using Deep Learning
^^^^^^^^^^^^^^^^^^^
Weights/parameters trained for any task are likely to perform best on data similar to the data
used for training. If scans from a particular sequence were used for training, the performance
of those weights is likely optimized for that specific scan prescription (resolution, TR/TE, etc.).
As a result, they may not perform as well when segmenting images acquired using different scan types.

If you do train weights for any deep learning task that you would want to include as part of this
repo, please provide a link to those weights and detail the scanning parameters/sequence used to
acquire those images.

.. Substitutions
.. |T2| replace:: T\ :sub:`2`
.. |T1| replace:: T\ :sub:`1`
.. |T1rho| replace:: T\ :sub:`1`:math:`{\rho}`
.. |T2star| replace:: T\ :sub:`2`:sup:`*`