---
title: "Installing ichseg and Getting Started"
author: "Vignette Author"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Installing ichseg and Getting Started}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r}
knitr::opts_chunk$set(eval = FALSE)
```

# Installation and Setup

R is used for most of the computations, but the required packages have external dependencies that must be installed for everything to work.
Most of this software requires a Linux/*nix machine.

If you're using a Docker/Singularity container, I suggest using Debian and NeuroDebian (http://neuro.debian.net/).

## External Dependencies

### R Package Setup

```{r, engine = "bash"}
sudo apt-get install r-base r-base-dev
```

In `R`:
```{r}
install.packages("devtools")
```

### FSL
FSL (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) must be installed for the `fslr` package to work (see https://github.com/muschellij2/fslr for additional NeuroDebian setup).

After setting up the NeuroDebian `apt` keys, the commands below may install FSL so that it works with `fslr` (not guaranteed on all systems).

```{r, engine = "bash"}
sudo apt-get install fsl-complete

FSLDIR=/usr/local/fsl
FSLSHARE=/usr/share/data

mkdir -p ${FSLDIR}/bin && cp /usr/lib/fsl/5.0/* ${FSLDIR}/bin/
mkdir -p ${FSLDIR}/data/standard && mkdir -p ${FSLDIR}/data/atlases

#######################################
# Setting things up like other installers
#######################################
cp -R ${FSLSHARE}/fsl-mni152-templates/* ${FSLDIR}/data/standard/

# setting up atlases
cp -R ${FSLSHARE}/harvard-oxford-atlases/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/juelich-histological-atlas/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/bangor-cerebellar-atlas/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/jhu-dti-whitematter-atlas/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/forstmann-subthalamic-nucleus-atlas/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/fsl-resting-connectivity-parcellation-atlases/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/mni-structural-atlas/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/oxford-thalamic-connectivity-atlas/* ${FSLDIR}/data/atlases/
cp -R ${FSLSHARE}/talairach-daemon-atlas/* ${FSLDIR}/data/atlases/

# make the FSL libraries visible at login
echo "export LD_LIBRARY_PATH=/usr/lib/fsl/5.0:\$LD_LIBRARY_PATH" >> ~/.profile
echo "export LD_LIBRARY_PATH=/usr/lib/fsl/5.0:\$LD_LIBRARY_PATH" >> ~/.bash_profile
```

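Once FSL is installed, `fslr` needs to know where to find it.  A minimal sketch in `R` (the path below assumes the `FSLDIR` used above; adjust it to your installation):

```{r}
# point fslr to the FSL installation and set the output type
options(fsl.path = "/usr/local/fsl")
options(fsl.outputtype = "NIFTI_GZ")
```
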
### ANTsR and ITK-based software

The `ichseg` package relies upon `extrantsr`, which in turn relies on `ANTsR`, `ANTsRCore`, and `ITKR`, powerful packages built on the [ITK](https://itk.org) software.

Building these requires `git` and `cmake`, so they must be installed:

```{r, engine = "bash"}
sudo apt-get install git-core
sudo apt-get install cmake
```

In `R`:

```{r}
devtools::install_github("muschellij2/ITKR")
devtools::install_github("muschellij2/ANTsRCore")
devtools::install_github("muschellij2/ANTsR")
```

An easier way to install these packages is likely to use the pre-built binaries.

#### OS X Binaries

The links for the OS X binaries are:
```
https://github.com/muschellij2/ITKR/releases/download/v0.4.12.4/ITKR_0.4.12.4.tgz
https://github.com/muschellij2/ANTsRCore/releases/download/v0.4.2.1/ANTsRCore_0.4.2.1.tgz
https://github.com/muschellij2/ANTsR/releases/download/v0.6.2/ANTsR_0.6.2.tgz
```

#### Linux Binaries

The links for the Linux binaries are:
```
https://github.com/muschellij2/ITKR/releases/download/v0.4.12.4/ITKR_0.4.12.4_R_x86_64-pc-linux-gnu.tar.gz
https://github.com/muschellij2/ANTsRCore/releases/download/v0.4.2.1/ANTsRCore_0.4.2.1_R_x86_64-pc-linux-gnu.tar.gz
https://github.com/muschellij2/ANTsR/releases/download/v0.6.2/ANTsR_0.6.2_R_x86_64-pc-linux-gnu.tar.gz
```

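For example, a minimal sketch of installing one of these binaries directly (the Linux `ITKR` link is shown; swap in the other URLs, or the `.tgz` links on OS X):

```{r}
# download a pre-built binary and install it from the local file
url = paste0("https://github.com/muschellij2/ITKR/releases/download/",
             "v0.4.12.4/ITKR_0.4.12.4_R_x86_64-pc-linux-gnu.tar.gz")
destfile = file.path(tempdir(), basename(url))
download.file(url, destfile = destfile, mode = "wb")
install.packages(destfile, repos = NULL)
```
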
### Installing ichseg

The main R package that does the ICH segmentation in CT is `ichseg`:
https://github.com/muschellij2/ichseg.  After the 3 packages above are installed, you are ready to install `extrantsr` and then the main package `ichseg`:
```{r}
devtools::install_github("muschellij2/extrantsr", upgrade_dependencies = FALSE)
devtools::install_github("muschellij2/ichseg", upgrade_dependencies = FALSE)
```

# Workflow

Here we assume we have some data in DICOM format that is unsorted and contains multiple types of images (such as MRI scans, localizer scans, CTAs, etc.).

## Sorting DICOM data (not solved)

Start with a folder of DICOM data.  There can be multiple images in there; they will be sorted in the following steps.

We use the `tractor.base::sortDicomDirectories` function.  We need at least version 3.1.3 for sorting:

```{r}
if (!("tractor.base" %in% installed.packages())) {
  install.packages("tractor.base")
}
tractor_version = packageVersion("tractor.base")
if (compareVersion(as.character(tractor_version), "3.1.3") < 0) {
  devtools::install_github(
    "tractor/tractor", 
    subdir = "tractor.base")
}
```

Now that you have the package installed, you should run the following steps for DICOM sorting (where you replace `"/path/to/dicom/files"` with the relevant directory):

```{r}
dicom_directory = "/path/to/dicom/files"
before_run = list.dirs(dicom_directory, recursive = FALSE)

# find all zip files, then delete them
# (unzip first if you want the contents; see below)
all_zip = list.files(
  path = dicom_directory,
  pattern = "[.]zip$",
  recursive = TRUE, full.names = TRUE)
if (length(all_zip) > 0) {
  file.remove(all_zip)
}

# find all rar files, then delete them
all_rar = list.files(
  path = dicom_directory,
  pattern = "[.]rar$",
  recursive = TRUE, full.names = TRUE)
if (length(all_rar) > 0) {
  file.remove(all_rar)
}

# sort the data
res = tractor.base::sortDicomDirectories(
  directories = dicom_directory, 
  deleteOriginals = TRUE,
  ignoreTransferSyntax = TRUE
  )

# remove old directories
after_run = list.dirs(dicom_directory, recursive = FALSE)
new_dirs = setdiff(after_run, before_run)
old_dirs = intersect(after_run, before_run)

unlink(old_dirs, recursive = TRUE)
```

All files ending in `.zip` will be deleted (sometimes they are duplicated).  If you want to keep their contents, I recommend running `utils::unzip` in R before this step, as in the sketch below.  The data will be copied, sorted, and the old data will be deleted.

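A minimal sketch that extracts each zip file in place with `utils::unzip` before the sorting step (so the contents survive when the zip files are removed):

```{r}
# extract each zip file next to itself before the zip files are deleted
all_zip = list.files(
  path = dicom_directory,
  pattern = "[.]zip$",
  recursive = TRUE, full.names = TRUE)
for (izip in all_zip) {
  utils::unzip(izip, exdir = dirname(izip))
}
```
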
The directory specified in `dicom_directory` is sorted by series by default, using the Series Instance UID (DICOM tag (0020,000e)).

### Subsetting: Not completed

Now that the data has been sorted, the relevant data can be subset.  The PItcHPERFeCT model requires non-contrast CT data.  This means removing anything with imaging modality MR (MRIs), CT angiograms (CTAs), and a slew of derived images (such as screen saves, dose reports, localizers, and 3D reconstructions).

These can be subset using the DICOM header information:

* `Modality`: (0008,0060) tag
* `ImageType`: (0008,0008) tag
* `Frame Type`: (0008,9007) tag
* `ConvolutionKernel`: (0018,1210) tag (required if Frame Type (0008,9007) Value 1 of this frame is ORIGINAL; may be present otherwise)
* `Convolution Kernel Group`: (0018,9316) tag
* `X-ray Tube Current`: (0018,1151) tag, x-ray tube current in mA
* `Exposure Time`: (0018,1150) tag, time of x-ray exposure in msec

With this information, we will start removing unnecessary series.  We will use the `dcmtk` package for this:

```{r}
if (!("dcmtk" %in% installed.packages())) {
  devtools::install_github("muschellij2/dcmtk")
} else {
  dcmtk_ver = packageVersion("dcmtk")
  if (dcmtk_ver < "0.5.5") {
    devtools::install_github("muschellij2/dcmtk")
  }  
}
library(dcmtk)
```

We read in all of the header information from each DICOM file using the `dcmtk::read_dicom_header` function:
```{r}
n_dirs = length(new_dirs)
all_data = vector(mode = "list", length = n_dirs)
for (i in seq(n_dirs)) {
  basedir = new_dirs[i]
  hdr = dcmtk::read_dicom_header(file = paste0(basedir, "/*"))
  hdr$dir = basedir
  all_data[[i]] = hdr
}
```

NB: these data contain all of the header information, not just the fields specified above, including protected health information (PHI).

```{r}
library(dplyr)
all_hdr = dplyr::bind_rows(all_data)
# keep only the tags of interest
keep_tags = c("(0008,0008)", "(0008,0060)", "(0018,1210)",
              "(0018,1160)", "(0018,1151)", "(0018,0081)",
              "(0018,1150)", "(0018,0080)", "(0008,9007)",
              "(0018,9316)")
sub_hdr = all_hdr %>% 
  filter(tag %in% keep_tags) %>% 
  select(file, name, value)
```

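The exact subsetting rules depend on the study, but as a rough sketch (using the `name`, `value`, and `dir` columns created above; the matching patterns are assumptions, and the `value` formatting depends on the `dcmtk` output), one could flag series that are not non-contrast CT:

```{r}
# flag directories whose series look like MR or derived/localizer images
bad_dirs = all_hdr %>% 
  filter(
    (name == "Modality" & !grepl("CT", value)) |
      (name == "ImageType" & grepl("DERIVED|SECONDARY|LOCALIZER", value))
  ) %>% 
  pull(dir) %>% 
  unique()
keep_dirs = setdiff(new_dirs, bad_dirs)
```
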
238
239
## Converting DICOM to NIfTI data
240
241
Once we have a directory of DICOM files, we can convert them using to NIfTI using the DICOM to NIfTI converter [dcm2nii](https://www.nitrc.org/projects/dcm2nii/).  We use this through the `dcm2niir` package
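If `dcm2niir` is not already installed, it can be installed from GitHub in the same way as the packages above:

```{r}
# install dcm2niir from GitHub if it is not already present
if (!("dcm2niir" %in% installed.packages())) {
  devtools::install_github("muschellij2/dcm2niir")
}
```
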
The current workflow is to convert a directory (`directory_of_DICOMS`):

```{r, eval = FALSE}
library(dcm2niir)

# run dcm2nii on the directory and keep the relevant converted file
out = dcm2nii(basedir = directory_of_DICOMS)
res_file = check_dcm2nii(out)
```

251
252
## Ensuring HU scale
253
254
We then read in the file, make sure it's within the standard range of HU and then write it out.
255
256
257
```{r, eval = FALSE}
258
library(neurobase)
259
####################################  
260
# window the image
261
####################################
262
window = c(-1024, 3071)
263
img = readnii(res_file)
264
img = window_img(img, window = window)
265
img = cal_img(img)
266
scl_slope(img) = 1
267
scl_inter(img) = 0
268
aux_file(img) = ""
269
descrip(img) = ""
270
writenii(img, filename = res_file)
271
```
## ICH Segmentation

This file can be passed into `ichseg::ich_segment`.

```{r, eval = FALSE}
results = ichseg::ich_segment(res_file)
```

### Resampling to 1x1x1

In order to keep the voxel dimensions the same across images, we can rigidly register to a template (which is done in `ich_segment`).

You can also resample the image to $1\times1\times1$ mm voxels:

```{r}
library(ANTsRCore)
img = antsImageRead(res_file)
res_img = resampleImage(img, resampleParams = c(1, 1, 1),
  useVoxels = FALSE)
```

After resampling, the images will be on the same voxel grid, but they are not registered and not necessarily oriented the same way.

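If you want to write the resampled image back out (the output path below is just an example):

```{r}
# write the resampled image to disk
outfile = tempfile(fileext = ".nii.gz")
antsImageWrite(res_img, outfile)
```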